Test Report: QEMU_macOS 19736

c03ccee26a80b9ecde7f622e8f7f7412408a7b8a:2024-10-01:36456

Failed tests (100/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.54
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.3
45 TestCertOptions 10.29
46 TestCertExpiration 195.38
47 TestDockerFlags 10.11
48 TestForceSystemdFlag 10.28
49 TestForceSystemdEnv 11
94 TestFunctional/parallel/ServiceCmdConnect 32.8
166 TestMultiControlPlane/serial/StopSecondaryNode 64.18
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 51.93
168 TestMultiControlPlane/serial/RestartSecondaryNode 87.09
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 5.61
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.39
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 202.08
174 TestMultiControlPlane/serial/RestartCluster 5.25
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
180 TestImageBuild/serial/Setup 9.9
183 TestJSONOutput/start/Command 9.92
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.04
212 TestMinikubeProfile 10.14
215 TestMountStart/serial/StartWithMountFirst 10.62
218 TestMultiNode/serial/FreshStart2Nodes 10.15
219 TestMultiNode/serial/DeployApp2Nodes 80.98
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.14
226 TestMultiNode/serial/StartAfterStop 46.28
227 TestMultiNode/serial/RestartKeepsNodes 7.21
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 2.24
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.02
235 TestPreload 10.19
237 TestScheduledStopUnix 10.3
238 TestSkaffold 16.59
241 TestRunningBinaryUpgrade 621.75
243 TestKubernetesUpgrade 18.69
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.49
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.13
259 TestStoppedBinaryUpgrade/Upgrade 582.77
261 TestPause/serial/Start 10.2
271 TestNoKubernetes/serial/StartWithK8s 9.98
272 TestNoKubernetes/serial/StartWithStopK8s 5.86
273 TestNoKubernetes/serial/Start 5.88
277 TestNoKubernetes/serial/StartNoArgs 5.84
279 TestNetworkPlugins/group/auto/Start 9.91
280 TestNetworkPlugins/group/kindnet/Start 9.88
281 TestNetworkPlugins/group/calico/Start 9.87
282 TestNetworkPlugins/group/custom-flannel/Start 9.84
283 TestNetworkPlugins/group/false/Start 9.79
284 TestNetworkPlugins/group/enable-default-cni/Start 9.83
285 TestNetworkPlugins/group/flannel/Start 9.96
286 TestNetworkPlugins/group/bridge/Start 9.84
288 TestNetworkPlugins/group/kubenet/Start 9.92
290 TestStartStop/group/old-k8s-version/serial/FirstStart 9.89
291 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
295 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
297 TestStartStop/group/no-preload/serial/FirstStart 10.04
298 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/old-k8s-version/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.96
304 TestStartStop/group/no-preload/serial/DeployApp 0.09
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/no-preload/serial/SecondStart 5.94
309 TestStartStop/group/embed-certs/serial/DeployApp 0.09
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/embed-certs/serial/SecondStart 5.28
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/no-preload/serial/Pause 0.1
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.98
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/embed-certs/serial/Pause 0.1
325 TestStartStop/group/newest-cni/serial/FirstStart 10.07
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.33
335 TestStartStop/group/newest-cni/serial/SecondStart 5.26
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (41.54s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-368000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-368000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.542313416s)

-- stdout --
	{"specversion":"1.0","id":"61ea54bc-134d-4b54-961b-fc8766fa69a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-368000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1a399f6-def4-473a-b59f-306ddd326347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"198a31b2-3b28-47f7-b744-a4b2697e082a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig"}}
	{"specversion":"1.0","id":"f13621b3-7218-4aba-baad-f47325646170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2aaac713-df0b-43b9-89a7-0d2c93db19af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d36bab81-97c8-45a0-aba4-cd35924d26a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube"}}
	{"specversion":"1.0","id":"cf30a634-4176-400c-be8c-224d77948a5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"802bb137-5c4a-4838-91d8-9dd32f606029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eec90af5-9777-43a6-9fba-26067f9abbc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6269db83-e99a-4219-ae1f-6ed3a752b7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0042a0d-6627-480e-9e15-cdfae1373a88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-368000\" primary control-plane node in \"download-only-368000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"77600a16-46d3-4f0a-91da-1a8aee0c87dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c6befce-7dda-4585-93a5-6dfe4f194fb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0] Decompressors:map[bz2:0x1400059f710 gz:0x1400059f718 tar:0x1400059f6c0 tar.bz2:0x1400059f6d0 tar.gz:0x1400059f6e0 tar.xz:0x1400059f6f0 tar.zst:0x1400059f700 tbz2:0x1400059f6d0 tgz:0x14
00059f6e0 txz:0x1400059f6f0 tzst:0x1400059f700 xz:0x1400059f720 zip:0x1400059f730 zst:0x1400059f728] Getters:map[file:0x140014b25c0 http:0x140000b8140 https:0x140000b8190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"7db96858-9ad8-410a-80ae-f00e053acbf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1001 11:46:09.291925    1596 out.go:345] Setting OutFile to fd 1 ...
	I1001 11:46:09.292083    1596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:09.292086    1596 out.go:358] Setting ErrFile to fd 2...
	I1001 11:46:09.292089    1596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:09.292209    1596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	W1001 11:46:09.292302    1596 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19736-1073/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19736-1073/.minikube/config/config.json: no such file or directory
	I1001 11:46:09.293528    1596 out.go:352] Setting JSON to true
	I1001 11:46:09.310662    1596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":934,"bootTime":1727807435,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 11:46:09.310726    1596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 11:46:09.315277    1596 out.go:97] [download-only-368000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 11:46:09.315443    1596 notify.go:220] Checking for updates...
	W1001 11:46:09.315505    1596 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 11:46:09.319015    1596 out.go:169] MINIKUBE_LOCATION=19736
	I1001 11:46:09.326102    1596 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 11:46:09.330066    1596 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 11:46:09.334013    1596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 11:46:09.337060    1596 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	W1001 11:46:09.343017    1596 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 11:46:09.343294    1596 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 11:46:09.349103    1596 out.go:97] Using the qemu2 driver based on user configuration
	I1001 11:46:09.349123    1596 start.go:297] selected driver: qemu2
	I1001 11:46:09.349139    1596 start.go:901] validating driver "qemu2" against <nil>
	I1001 11:46:09.349210    1596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 11:46:09.350879    1596 out.go:169] Automatically selected the socket_vmnet network
	I1001 11:46:09.356630    1596 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 11:46:09.356759    1596 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 11:46:09.356814    1596 cni.go:84] Creating CNI manager for ""
	I1001 11:46:09.356849    1596 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 11:46:09.356895    1596 start.go:340] cluster config:
	{Name:download-only-368000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 11:46:09.362247    1596 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 11:46:09.367081    1596 out.go:97] Downloading VM boot image ...
	I1001 11:46:09.367098    1596 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1001 11:46:27.714087    1596 out.go:97] Starting "download-only-368000" primary control-plane node in "download-only-368000" cluster
	I1001 11:46:27.714111    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:28.005233    1596 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 11:46:28.005335    1596 cache.go:56] Caching tarball of preloaded images
	I1001 11:46:28.006156    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:28.013150    1596 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 11:46:28.013180    1596 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:28.619724    1596 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 11:46:49.225858    1596 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:49.226036    1596 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:49.932157    1596 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 11:46:49.932372    1596 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/download-only-368000/config.json ...
	I1001 11:46:49.932391    1596 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/download-only-368000/config.json: {Name:mk9628911aba49ea32a809a43c6ae648f373b516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 11:46:49.932731    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:49.932938    1596 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1001 11:46:50.771966    1596 out.go:193] 
	W1001 11:46:50.780030    1596 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0] Decompressors:map[bz2:0x1400059f710 gz:0x1400059f718 tar:0x1400059f6c0 tar.bz2:0x1400059f6d0 tar.gz:0x1400059f6e0 tar.xz:0x1400059f6f0 tar.zst:0x1400059f700 tbz2:0x1400059f6d0 tgz:0x1400059f6e0 txz:0x1400059f6f0 tzst:0x1400059f700 xz:0x1400059f720 zip:0x1400059f730 zst:0x1400059f728] Getters:map[file:0x140014b25c0 http:0x140000b8140 https:0x140000b8190] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1001 11:46:50.780078    1596 out_reason.go:110] 
	W1001 11:46:50.788900    1596 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 11:46:50.793820    1596 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-368000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.54s)
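
The getter error above is the root cause: the checksum fetch for the v1.20.0 darwin/arm64 kubectl returns HTTP 404, consistent with no darwin/arm64 kubectl binary being published for v1.20.0. A minimal Go sketch to reproduce the probe outside minikube, with the URL taken verbatim from the failing download:

// probe404.go: HEAD the kubectl binary and its checksum file to
// reproduce the 404 reported by the getter above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	const base = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
	for _, url := range []string{base, base + ".sha256"} {
		resp, err := http.Head(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		// "bad response code: 404" in the log corresponds to a 404 status here.
		fmt.Printf("%s -> %s\n", url, resp.Status)
	}
}

The TestDownloadOnly/v1.20.0/kubectl failure that follows is the same issue downstream: the binary was never cached, so the stat on the cache path fails.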

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestBinaryMirror (0.27s)

=== RUN   TestBinaryMirror
I1001 11:47:10.213939    1595 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-337000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-337000 --alsologtostderr --binary-mirror http://127.0.0.1:49314 --driver=qemu2 : exit status 40 (171.001292ms)

-- stdout --
	* [binary-mirror-337000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-337000" primary control-plane node in "binary-mirror-337000" cluster
	
	

-- /stdout --
** stderr ** 
	I1001 11:47:10.272833    1663 out.go:345] Setting OutFile to fd 1 ...
	I1001 11:47:10.272952    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:47:10.272955    1663 out.go:358] Setting ErrFile to fd 2...
	I1001 11:47:10.272957    1663 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:47:10.273087    1663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 11:47:10.274133    1663 out.go:352] Setting JSON to false
	I1001 11:47:10.290123    1663 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":995,"bootTime":1727807435,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 11:47:10.290199    1663 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 11:47:10.294613    1663 out.go:177] * [binary-mirror-337000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 11:47:10.306486    1663 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 11:47:10.306542    1663 notify.go:220] Checking for updates...
	I1001 11:47:10.314442    1663 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 11:47:10.318481    1663 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 11:47:10.321512    1663 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 11:47:10.324493    1663 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 11:47:10.327662    1663 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 11:47:10.331504    1663 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 11:47:10.338434    1663 start.go:297] selected driver: qemu2
	I1001 11:47:10.338439    1663 start.go:901] validating driver "qemu2" against <nil>
	I1001 11:47:10.338487    1663 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 11:47:10.341487    1663 out.go:177] * Automatically selected the socket_vmnet network
	I1001 11:47:10.346554    1663 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 11:47:10.346643    1663 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 11:47:10.346664    1663 cni.go:84] Creating CNI manager for ""
	I1001 11:47:10.346693    1663 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 11:47:10.346700    1663 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 11:47:10.346734    1663 start.go:340] cluster config:
	{Name:binary-mirror-337000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-337000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49314 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket
_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 11:47:10.350240    1663 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 11:47:10.358435    1663 out.go:177] * Starting "binary-mirror-337000" primary control-plane node in "binary-mirror-337000" cluster
	I1001 11:47:10.362430    1663 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 11:47:10.362446    1663 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 11:47:10.362458    1663 cache.go:56] Caching tarball of preloaded images
	I1001 11:47:10.362553    1663 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 11:47:10.362559    1663 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 11:47:10.362789    1663 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/binary-mirror-337000/config.json ...
	I1001 11:47:10.362800    1663 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/binary-mirror-337000/config.json: {Name:mk89c66a76a9d8c505fe3a6920fafef28eed7384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 11:47:10.363172    1663 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 11:47:10.363242    1663 download.go:107] Downloading: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I1001 11:47:10.393348    1663 out.go:201] 
	W1001 11:47:10.397540    1663 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0] Decompressors:map[bz2:0x140003f2160 gz:0x140003f2168 tar:0x140003f20b0 tar.bz2:0x140003f20c0 tar.gz:0x140003f20d0 tar.xz:0x140003f2140 tar.zst:0x140003f2150 tbz2:0x140003f20c0 tgz:0x140003f20d0 txz:0x140003f2140 tzst:0x140003f2150 xz:0x140003f2170 zip:0x140003f21a0 zst:0x140003f2178] Getters:map[file:0x14000482fa0 http:0x1400082b130 https:0x1400082b180] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0 0x1073156c0] Decompressors:map[bz2:0x140003f2160 gz:0x140003f2168 tar:0x140003f20b0 tar.bz2:0x140003f20c0 tar.gz:0x140003f20d0 tar.xz:0x140003f2140 tar.zst:0x140003f2150 tbz2:0x140003f20c0 tgz:0x140003f20d0 txz:0x140003f2140 tzst:0x140003f2150 xz:0x140003f2170 zip:0x140003f21a0 zst:0x140003f2178] Getters:map[file:0x14000482fa0 http:0x1400082b130 https:0x1400082b180] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W1001 11:47:10.397551    1663 out.go:270] * 
	* 
	W1001 11:47:10.398129    1663 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 11:47:10.411476    1663 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-337000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49314" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-337000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-337000
--- FAIL: TestBinaryMirror (0.27s)
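
For context, --binary-mirror rewrites binary downloads to an HTTP server that mimics dl.k8s.io's /<version>/bin/<os>/<arch>/ layout, as the rewritten URL http://127.0.0.1:49314/v1.31.1/bin/darwin/arm64/kubectl in the log shows. The "unexpected EOF" means the mirror closed the connection mid-response rather than serving the file. A minimal sketch of a standalone mirror for debugging this locally, assuming binaries are staged under a hypothetical ./mirror directory (the port matches the test log):

// mirror.go: serve a dl.k8s.io-style directory tree over HTTP.
// Stage files at ./mirror/v1.31.1/bin/darwin/arm64/{kubectl,kubectl.sha256}.
package main

import (
	"log"
	"net/http"
)

func main() {
	handler := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on 127.0.0.1:49314")
	log.Fatal(http.ListenAndServe("127.0.0.1:49314", handler))
}

With both files staged, the same start command from the test should get past the kubectl cache step.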

TestOffline (10.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-069000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-069000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.143083375s)

-- stdout --
	* [offline-docker-069000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-069000" primary control-plane node in "offline-docker-069000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-069000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:27:12.588931    3949 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:27:12.589080    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:12.589086    3949 out.go:358] Setting ErrFile to fd 2...
	I1001 12:27:12.589088    3949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:12.589201    3949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:27:12.590402    3949 out.go:352] Setting JSON to false
	I1001 12:27:12.608043    3949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3397,"bootTime":1727807435,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:27:12.608118    3949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:27:12.615274    3949 out.go:177] * [offline-docker-069000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:27:12.624211    3949 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:27:12.624258    3949 notify.go:220] Checking for updates...
	I1001 12:27:12.631079    3949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:27:12.634116    3949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:27:12.635405    3949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:27:12.638115    3949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:27:12.641128    3949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:27:12.644736    3949 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:12.644797    3949 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:27:12.649051    3949 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:27:12.656104    3949 start.go:297] selected driver: qemu2
	I1001 12:27:12.656115    3949 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:27:12.656123    3949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:27:12.657993    3949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:27:12.661070    3949 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:27:12.664193    3949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:27:12.664210    3949 cni.go:84] Creating CNI manager for ""
	I1001 12:27:12.664239    3949 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:27:12.664246    3949 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:27:12.664284    3949 start.go:340] cluster config:
	{Name:offline-docker-069000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/b
in/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:27:12.667875    3949 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:27:12.675136    3949 out.go:177] * Starting "offline-docker-069000" primary control-plane node in "offline-docker-069000" cluster
	I1001 12:27:12.679103    3949 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:27:12.679132    3949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:27:12.679143    3949 cache.go:56] Caching tarball of preloaded images
	I1001 12:27:12.679213    3949 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:27:12.679218    3949 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:27:12.679289    3949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/offline-docker-069000/config.json ...
	I1001 12:27:12.679300    3949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/offline-docker-069000/config.json: {Name:mkfa09fddaa7bd2c0e1818a72341b45f0fd19367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:27:12.679651    3949 start.go:360] acquireMachinesLock for offline-docker-069000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:12.679687    3949 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "offline-docker-069000"
	I1001 12:27:12.679698    3949 start.go:93] Provisioning new machine with config: &{Name:offline-docker-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:12.679728    3949 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:12.684058    3949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:12.700113    3949 start.go:159] libmachine.API.Create for "offline-docker-069000" (driver="qemu2")
	I1001 12:27:12.700148    3949 client.go:168] LocalClient.Create starting
	I1001 12:27:12.700219    3949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:12.700251    3949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:12.700260    3949 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:12.700304    3949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:12.700337    3949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:12.700345    3949 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:12.700703    3949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:12.866307    3949 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:13.006781    3949 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:13.006792    3949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:13.006986    3949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:13.016576    3949 main.go:141] libmachine: STDOUT: 
	I1001 12:27:13.016594    3949 main.go:141] libmachine: STDERR: 
	I1001 12:27:13.016663    3949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2 +20000M
	I1001 12:27:13.025335    3949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:13.025365    3949 main.go:141] libmachine: STDERR: 
	I1001 12:27:13.025388    3949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:13.025393    3949 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:13.025406    3949 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:13.025434    3949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a3:32:ea:96:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:13.027335    3949 main.go:141] libmachine: STDOUT: 
	I1001 12:27:13.027349    3949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:13.027368    3949 client.go:171] duration metric: took 327.220666ms to LocalClient.Create
	I1001 12:27:15.029405    3949 start.go:128] duration metric: took 2.3497175s to createHost
	I1001 12:27:15.029428    3949 start.go:83] releasing machines lock for "offline-docker-069000", held for 2.349784917s
	W1001 12:27:15.029462    3949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:15.035091    3949 out.go:177] * Deleting "offline-docker-069000" in qemu2 ...
	W1001 12:27:15.054394    3949 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:15.054408    3949 start.go:729] Will try again in 5 seconds ...
	I1001 12:27:20.054592    3949 start.go:360] acquireMachinesLock for offline-docker-069000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:20.055047    3949 start.go:364] duration metric: took 315.042µs to acquireMachinesLock for "offline-docker-069000"
	I1001 12:27:20.055193    3949 start.go:93] Provisioning new machine with config: &{Name:offline-docker-069000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-069000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:20.055541    3949 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:20.069212    3949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:20.116541    3949 start.go:159] libmachine.API.Create for "offline-docker-069000" (driver="qemu2")
	I1001 12:27:20.116605    3949 client.go:168] LocalClient.Create starting
	I1001 12:27:20.116749    3949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:20.116816    3949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:20.116834    3949 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:20.116903    3949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:20.116947    3949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:20.116962    3949 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:20.117489    3949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:20.287253    3949 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:20.636150    3949 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:20.636160    3949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:20.636357    3949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:20.645704    3949 main.go:141] libmachine: STDOUT: 
	I1001 12:27:20.645718    3949 main.go:141] libmachine: STDERR: 
	I1001 12:27:20.645780    3949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2 +20000M
	I1001 12:27:20.653586    3949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:20.653598    3949 main.go:141] libmachine: STDERR: 
	I1001 12:27:20.653612    3949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:20.653617    3949 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:20.653629    3949 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:20.653662    3949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:78:95:bd:b0:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/offline-docker-069000/disk.qcow2
	I1001 12:27:20.655184    3949 main.go:141] libmachine: STDOUT: 
	I1001 12:27:20.655195    3949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:20.655210    3949 client.go:171] duration metric: took 538.610333ms to LocalClient.Create
	I1001 12:27:22.655641    3949 start.go:128] duration metric: took 2.600057333s to createHost
	I1001 12:27:22.655750    3949 start.go:83] releasing machines lock for "offline-docker-069000", held for 2.600735s
	W1001 12:27:22.656422    3949 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-069000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-069000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:22.664744    3949 out.go:201] 
	W1001 12:27:22.676808    3949 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:27:22.676868    3949 out.go:270] * 
	* 
	W1001 12:27:22.679615    3949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:27:22.688648    3949 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-069000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-01 12:27:22.704793 -0700 PDT m=+2473.546185085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-069000 -n offline-docker-069000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-069000 -n offline-docker-069000: exit status 7 (67.156166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-069000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-069000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-069000
--- FAIL: TestOffline (10.30s)
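
Note on the failure mode: every start in this run fails at the same point: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and its connection to /var/run/socket_vmnet is refused, which indicates the socket_vmnet daemon is not running on the CI host. A minimal recovery sketch on the host, assuming socket_vmnet is installed at the paths shown in the log (the --vmnet-gateway value below is the upstream default, not something recorded in this report):

	# Check that the daemon's unix socket exists at the path the driver dials
	ls -l /var/run/socket_vmnet
	# If it is missing or refusing connections, start the daemon (root required)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet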

TestCertOptions (10.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-867000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-867000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.025152417s)

-- stdout --
	* [cert-options-867000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-867000" primary control-plane node in "cert-options-867000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-867000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-867000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-867000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-867000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-867000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (77.328916ms)

-- stdout --
	* The control-plane node cert-options-867000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-867000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-867000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-867000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-867000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-867000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (38.758292ms)

-- stdout --
	* The control-plane node cert-options-867000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-867000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-867000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-867000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-867000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-01 12:27:54.13493 -0700 PDT m=+2504.976971001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-867000 -n cert-options-867000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-867000 -n cert-options-867000: exit status 7 (29.154125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-867000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-867000
--- FAIL: TestCertOptions (10.29s)
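
For reference, the SAN assertions at cert_options_test.go:69 reduce to reading the apiserver certificate inside the node; a sketch of the equivalent manual check, runnable only once the VM actually boots (profile name and cert path taken from the log above):

	# Dump the apiserver cert and inspect its Subject Alternative Names
	out/minikube-darwin-arm64 -p cert-options-867000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# A passing run would list 127.0.0.1, 192.168.15.15, localhost and www.google.com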

TestCertExpiration (195.38s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.042380875s)

-- stdout --
	* [cert-expiration-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-211000" primary control-plane node in "cert-expiration-211000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-211000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-211000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
E1001 12:30:52.103649    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.184879125s)

-- stdout --
	* [cert-expiration-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-211000" primary control-plane node in "cert-expiration-211000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-211000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-211000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-211000" primary control-plane node in "cert-expiration-211000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-211000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-211000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-01 12:30:54.023525 -0700 PDT m=+2684.869280751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-211000 -n cert-expiration-211000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-211000 -n cert-expiration-211000: exit status 7 (68.331917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-211000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-211000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-211000
--- FAIL: TestCertExpiration (195.38s)
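
The ~195s wall time is inherent to this test rather than a hang: it mints short-lived certs, waits for them to lapse, then restarts and expects a warning about expired certs (a warning that never appears here because the VM never boots). The sequence it drives, with flags taken verbatim from the log:

	# First start issues certs that expire in 3 minutes
	out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=3m --driver=qemu2
	# ~3 minutes later, a restart with a long expiration should detect and warn about the lapsed certs
	out/minikube-darwin-arm64 start -p cert-expiration-211000 --memory=2048 --cert-expiration=8760h --driver=qemu2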

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-780000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-780000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.871093334s)

-- stdout --
	* [docker-flags-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-780000" primary control-plane node in "docker-flags-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:27:33.881374    4137 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:27:33.881515    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:33.881518    4137 out.go:358] Setting ErrFile to fd 2...
	I1001 12:27:33.881520    4137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:33.881666    4137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:27:33.882792    4137 out.go:352] Setting JSON to false
	I1001 12:27:33.898776    4137 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3418,"bootTime":1727807435,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:27:33.898849    4137 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:27:33.905077    4137 out.go:177] * [docker-flags-780000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:27:33.912946    4137 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:27:33.913008    4137 notify.go:220] Checking for updates...
	I1001 12:27:33.919874    4137 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:27:33.922896    4137 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:27:33.925895    4137 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:27:33.928885    4137 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:27:33.931895    4137 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:27:33.935278    4137 config.go:182] Loaded profile config "force-systemd-flag-155000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:33.935345    4137 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:33.935395    4137 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:27:33.939882    4137 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:27:33.946881    4137 start.go:297] selected driver: qemu2
	I1001 12:27:33.946888    4137 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:27:33.946895    4137 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:27:33.948998    4137 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:27:33.951817    4137 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:27:33.954918    4137 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1001 12:27:33.954934    4137 cni.go:84] Creating CNI manager for ""
	I1001 12:27:33.954961    4137 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:27:33.954971    4137 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:27:33.954998    4137 start.go:340] cluster config:
	{Name:docker-flags-780000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVM
netClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:27:33.958499    4137 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:27:33.965847    4137 out.go:177] * Starting "docker-flags-780000" primary control-plane node in "docker-flags-780000" cluster
	I1001 12:27:33.969864    4137 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:27:33.969880    4137 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:27:33.969894    4137 cache.go:56] Caching tarball of preloaded images
	I1001 12:27:33.969964    4137 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:27:33.969975    4137 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:27:33.970040    4137 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/docker-flags-780000/config.json ...
	I1001 12:27:33.970052    4137 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/docker-flags-780000/config.json: {Name:mkd3c8a1ef7049c383a0abc3f09532709615ef00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:27:33.970290    4137 start.go:360] acquireMachinesLock for docker-flags-780000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:33.970328    4137 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "docker-flags-780000"
	I1001 12:27:33.970343    4137 start.go:93] Provisioning new machine with config: &{Name:docker-flags-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKe
y: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:33.970373    4137 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:33.978896    4137 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:33.997608    4137 start.go:159] libmachine.API.Create for "docker-flags-780000" (driver="qemu2")
	I1001 12:27:33.997636    4137 client.go:168] LocalClient.Create starting
	I1001 12:27:33.997714    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:33.997745    4137 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:33.997755    4137 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:33.997808    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:33.997833    4137 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:33.997841    4137 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:33.998291    4137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:34.158535    4137 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:34.186387    4137 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:34.186392    4137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:34.186599    4137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:34.195805    4137 main.go:141] libmachine: STDOUT: 
	I1001 12:27:34.195821    4137 main.go:141] libmachine: STDERR: 
	I1001 12:27:34.195884    4137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2 +20000M
	I1001 12:27:34.203836    4137 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:34.203860    4137 main.go:141] libmachine: STDERR: 
	I1001 12:27:34.203878    4137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:34.203889    4137 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:34.203900    4137 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:34.203928    4137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:49:9a:7b:96:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:34.205720    4137 main.go:141] libmachine: STDOUT: 
	I1001 12:27:34.205736    4137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:34.205757    4137 client.go:171] duration metric: took 208.118083ms to LocalClient.Create
	I1001 12:27:36.206610    4137 start.go:128] duration metric: took 2.236257125s to createHost
	I1001 12:27:36.206694    4137 start.go:83] releasing machines lock for "docker-flags-780000", held for 2.236401042s
	W1001 12:27:36.206809    4137 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:36.225078    4137 out.go:177] * Deleting "docker-flags-780000" in qemu2 ...
	W1001 12:27:36.266086    4137 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:36.266113    4137 start.go:729] Will try again in 5 seconds ...
	I1001 12:27:41.268245    4137 start.go:360] acquireMachinesLock for docker-flags-780000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:41.339785    4137 start.go:364] duration metric: took 71.44825ms to acquireMachinesLock for "docker-flags-780000"
	I1001 12:27:41.339941    4137 start.go:93] Provisioning new machine with config: &{Name:docker-flags-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKe
y: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:41.340193    4137 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:41.355752    4137 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:41.404139    4137 start.go:159] libmachine.API.Create for "docker-flags-780000" (driver="qemu2")
	I1001 12:27:41.404191    4137 client.go:168] LocalClient.Create starting
	I1001 12:27:41.404332    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:41.404401    4137 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:41.404417    4137 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:41.404490    4137 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:41.404534    4137 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:41.404550    4137 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:41.405051    4137 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:41.591236    4137 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:41.651950    4137 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:41.651956    4137 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:41.652142    4137 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:41.661398    4137 main.go:141] libmachine: STDOUT: 
	I1001 12:27:41.661417    4137 main.go:141] libmachine: STDERR: 
	I1001 12:27:41.661483    4137 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2 +20000M
	I1001 12:27:41.669270    4137 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:41.669285    4137 main.go:141] libmachine: STDERR: 
	I1001 12:27:41.669295    4137 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:41.669299    4137 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:41.669314    4137 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:41.669347    4137 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:69:c0:f6:e3:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/docker-flags-780000/disk.qcow2
	I1001 12:27:41.670951    4137 main.go:141] libmachine: STDOUT: 
	I1001 12:27:41.670964    4137 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:41.670977    4137 client.go:171] duration metric: took 266.78475ms to LocalClient.Create
	I1001 12:27:43.673103    4137 start.go:128] duration metric: took 2.332932792s to createHost
	I1001 12:27:43.673188    4137 start.go:83] releasing machines lock for "docker-flags-780000", held for 2.333408458s
	W1001 12:27:43.673528    4137 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:43.686232    4137 out.go:201] 
	W1001 12:27:43.700442    4137 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:27:43.700492    4137 out.go:270] * 
	* 
	W1001 12:27:43.702445    4137 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:27:43.710145    4137 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-780000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (81.176ms)

-- stdout --
	* The control-plane node docker-flags-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-780000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-780000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-780000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-780000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.51325ms)

-- stdout --
	* The control-plane node docker-flags-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-780000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-780000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-780000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-780000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-01 12:27:43.853953 -0700 PDT m=+2494.695781251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-780000 -n docker-flags-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-780000 -n docker-flags-780000: exit status 7 (29.579625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-780000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-780000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-780000
--- FAIL: TestDockerFlags (10.11s)
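
Had the node come up, the assertions at docker_test.go:63 and docker_test.go:73 amount to checking the docker unit's properties inside the VM; a manual sketch under that assumption (commands and expected values taken from the test invocation above):

	# --docker-env values should appear in the unit's Environment property
	out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to contain FOO=BAR and BAZ=BAT
	# --docker-opt=debug should appear as --debug in the daemon's ExecStart
	out/minikube-darwin-arm64 -p docker-flags-780000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"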

TestForceSystemdFlag (10.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-155000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-155000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.094143458s)

-- stdout --
	* [force-systemd-flag-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-155000" primary control-plane node in "force-systemd-flag-155000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-155000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:27:28.543228    4116 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:27:28.543375    4116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:28.543378    4116 out.go:358] Setting ErrFile to fd 2...
	I1001 12:27:28.543381    4116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:28.543513    4116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:27:28.544579    4116 out.go:352] Setting JSON to false
	I1001 12:27:28.560528    4116 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3413,"bootTime":1727807435,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:27:28.560596    4116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:27:28.568516    4116 out.go:177] * [force-systemd-flag-155000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:27:28.587553    4116 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:27:28.587577    4116 notify.go:220] Checking for updates...
	I1001 12:27:28.596490    4116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:27:28.600529    4116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:27:28.603454    4116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:27:28.606511    4116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:27:28.609525    4116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:27:28.611460    4116 config.go:182] Loaded profile config "force-systemd-env-777000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:28.611541    4116 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:28.611607    4116 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:27:28.615488    4116 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:27:28.622354    4116 start.go:297] selected driver: qemu2
	I1001 12:27:28.622361    4116 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:27:28.622367    4116 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:27:28.624757    4116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:27:28.627447    4116 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:27:28.630574    4116 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 12:27:28.630588    4116 cni.go:84] Creating CNI manager for ""
	I1001 12:27:28.630620    4116 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:27:28.630628    4116 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:27:28.630666    4116 start.go:340] cluster config:
	{Name:force-systemd-flag-155000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:27:28.635078    4116 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:27:28.642452    4116 out.go:177] * Starting "force-systemd-flag-155000" primary control-plane node in "force-systemd-flag-155000" cluster
	I1001 12:27:28.646521    4116 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:27:28.646538    4116 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:27:28.646548    4116 cache.go:56] Caching tarball of preloaded images
	I1001 12:27:28.646607    4116 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:27:28.646613    4116 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:27:28.646675    4116 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/force-systemd-flag-155000/config.json ...
	I1001 12:27:28.646687    4116 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/force-systemd-flag-155000/config.json: {Name:mk6625f52f2755f82070178092ccd502c08bab8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:27:28.646957    4116 start.go:360] acquireMachinesLock for force-systemd-flag-155000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:28.646999    4116 start.go:364] duration metric: took 34.083µs to acquireMachinesLock for "force-systemd-flag-155000"
	I1001 12:27:28.647013    4116 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:28.647044    4116 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:28.655470    4116 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:28.675150    4116 start.go:159] libmachine.API.Create for "force-systemd-flag-155000" (driver="qemu2")
	I1001 12:27:28.675182    4116 client.go:168] LocalClient.Create starting
	I1001 12:27:28.675254    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:28.675287    4116 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:28.675297    4116 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:28.675343    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:28.675367    4116 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:28.675378    4116 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:28.675811    4116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:28.836743    4116 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:29.154640    4116 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:29.154654    4116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:29.154896    4116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:29.164693    4116 main.go:141] libmachine: STDOUT: 
	I1001 12:27:29.164709    4116 main.go:141] libmachine: STDERR: 
	I1001 12:27:29.164778    4116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2 +20000M
	I1001 12:27:29.172631    4116 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:29.172646    4116 main.go:141] libmachine: STDERR: 
	I1001 12:27:29.172665    4116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:29.172675    4116 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:29.172689    4116 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:29.172713    4116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:a4:e8:53:07:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:29.174349    4116 main.go:141] libmachine: STDOUT: 
	I1001 12:27:29.174360    4116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:29.174381    4116 client.go:171] duration metric: took 499.20375ms to LocalClient.Create
	I1001 12:27:31.176585    4116 start.go:128] duration metric: took 2.529545209s to createHost
	I1001 12:27:31.176730    4116 start.go:83] releasing machines lock for "force-systemd-flag-155000", held for 2.529712125s
	W1001 12:27:31.176794    4116 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:31.211846    4116 out.go:177] * Deleting "force-systemd-flag-155000" in qemu2 ...
	W1001 12:27:31.241730    4116 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:31.241749    4116 start.go:729] Will try again in 5 seconds ...
	I1001 12:27:36.243859    4116 start.go:360] acquireMachinesLock for force-systemd-flag-155000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:36.244259    4116 start.go:364] duration metric: took 262.584µs to acquireMachinesLock for "force-systemd-flag-155000"
	I1001 12:27:36.244361    4116 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-155000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-155000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:36.244659    4116 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:36.252965    4116 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:36.295947    4116 start.go:159] libmachine.API.Create for "force-systemd-flag-155000" (driver="qemu2")
	I1001 12:27:36.295992    4116 client.go:168] LocalClient.Create starting
	I1001 12:27:36.296110    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:36.296172    4116 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:36.296189    4116 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:36.296241    4116 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:36.296278    4116 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:36.296292    4116 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:36.296853    4116 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:36.472072    4116 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:36.531651    4116 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:36.531661    4116 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:36.531844    4116 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:36.541127    4116 main.go:141] libmachine: STDOUT: 
	I1001 12:27:36.541140    4116 main.go:141] libmachine: STDERR: 
	I1001 12:27:36.541200    4116 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2 +20000M
	I1001 12:27:36.549138    4116 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:36.549164    4116 main.go:141] libmachine: STDERR: 
	I1001 12:27:36.549183    4116 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:36.549189    4116 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:36.549195    4116 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:36.549227    4116 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:3f:8b:55:47:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-flag-155000/disk.qcow2
	I1001 12:27:36.550886    4116 main.go:141] libmachine: STDOUT: 
	I1001 12:27:36.550898    4116 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:36.550912    4116 client.go:171] duration metric: took 254.919333ms to LocalClient.Create
	I1001 12:27:38.552318    4116 start.go:128] duration metric: took 2.307610916s to createHost
	I1001 12:27:38.552404    4116 start.go:83] releasing machines lock for "force-systemd-flag-155000", held for 2.308157625s
	W1001 12:27:38.552762    4116 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-155000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:38.574628    4116 out.go:201] 
	W1001 12:27:38.581710    4116 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:27:38.581742    4116 out.go:270] * 
	* 
	W1001 12:27:38.584462    4116 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:27:38.595465    4116 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-155000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-155000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-155000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.257209ms)

-- stdout --
	* The control-plane node force-systemd-flag-155000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-155000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-155000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-01 12:27:38.688615 -0700 PDT m=+2489.530336501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-155000 -n force-systemd-flag-155000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-155000 -n force-systemd-flag-155000: exit status 7 (33.564958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-155000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-155000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-155000
--- FAIL: TestForceSystemdFlag (10.28s)
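
Note: both VM creation attempts above die at the same step. /opt/socket_vmnet/bin/socket_vmnet_client is expected to connect to /var/run/socket_vmnet and hand the connected descriptor to QEMU (the -netdev socket,id=net0,fd=3 argument in the command line above), so when nothing is listening on that socket the connect is refused and qemu-system-aarch64 never launches. A plausible triage sequence on the build host — the Homebrew service name follows minikube's qemu2 driver docs and is an assumption, not something this log shows:

$ ls -l /var/run/socket_vmnet                 # does the socket exist at the path minikube expects?
$ pgrep -fl socket_vmnet                      # is the daemon actually running?
$ HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services start socket_vmnet   # (re)start the daemon as root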

TestForceSystemdEnv (11s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1001 12:27:23.226023    1595 install.go:79] stdout: 
W1001 12:27:23.226148    1595 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit 

I1001 12:27:23.226163    1595 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit]
I1001 12:27:23.235693    1595 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit]
I1001 12:27:23.244406    1595 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit]
I1001 12:27:23.252769    1595 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit]
I1001 12:27:23.268954    1595 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 12:27:23.269071    1595 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1001 12:27:25.037460    1595 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1001 12:27:25.037484    1595 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1001 12:27:25.037549    1595 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1001 12:27:25.037580    1595 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit
I1001 12:27:25.428184    1595 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40] Decompressors:map[bz2:0x14000739aa0 gz:0x14000739aa8 tar:0x14000739a50 tar.bz2:0x14000739a60 tar.gz:0x14000739a70 tar.xz:0x14000739a80 tar.zst:0x14000739a90 tbz2:0x14000739a60 tgz:0x14000739a70 txz:0x14000739a80 tzst:0x14000739a90 xz:0x14000739ab0 zip:0x14000739ac0 zst:0x14000739ab8] Getters:map[file:0x1400199b650 http:0x140007130e0 https:0x14000713130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 12:27:25.428273    1595 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit
I1001 12:27:28.477204    1595 install.go:79] stdout: 
W1001 12:27:28.477404    1595 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit 

I1001 12:27:28.477428    1595 install.go:99] testing: [sudo -n chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit]
I1001 12:27:28.488871    1595 install.go:106] running: [sudo chown root:wheel /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit]
I1001 12:27:28.498285    1595 install.go:99] testing: [sudo -n chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit]
I1001 12:27:28.506304    1595 install.go:106] running: [sudo chmod u+s /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/002/docker-machine-driver-hyperkit]
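
Note: the install.go/download.go lines interleaved above belong to a parallel test running in the same test binary (the TestHyperKitDriverInstallOrUpdate temp directories give it away), not to TestForceSystemdEnv itself. The fallback it logs — the arch-specific asset's checksum file 404s, so the common asset is fetched instead — is roughly the following, assuming the release's .sha256 asset contains a bare hash:

$ base=https://github.com/kubernetes/minikube/releases/download/v1.3.0
$ curl -fLO "$base/docker-machine-driver-hyperkit-arm64.sha256"    # 404: no arm64 checksum published for v1.3.0
$ curl -fLO "$base/docker-machine-driver-hyperkit.sha256"          # fall back to the common version
$ curl -fLO "$base/docker-machine-driver-hyperkit"
$ echo "$(cat docker-machine-driver-hyperkit.sha256)  docker-machine-driver-hyperkit" | shasum -a 256 -c -
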
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.804461667s)

-- stdout --
	* [force-systemd-env-777000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-777000" primary control-plane node in "force-systemd-env-777000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:27:22.886299    4084 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:27:22.886423    4084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:22.886426    4084 out.go:358] Setting ErrFile to fd 2...
	I1001 12:27:22.886429    4084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:27:22.886818    4084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:27:22.888205    4084 out.go:352] Setting JSON to false
	I1001 12:27:22.904427    4084 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3407,"bootTime":1727807435,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:27:22.904577    4084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:27:22.911923    4084 out.go:177] * [force-systemd-env-777000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:27:22.920077    4084 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:27:22.920119    4084 notify.go:220] Checking for updates...
	I1001 12:27:22.927999    4084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:27:22.931086    4084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:27:22.934006    4084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:27:22.937044    4084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:27:22.940057    4084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1001 12:27:22.943308    4084 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:27:22.943357    4084 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:27:22.948072    4084 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:27:22.954041    4084 start.go:297] selected driver: qemu2
	I1001 12:27:22.954047    4084 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:27:22.954052    4084 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:27:22.956202    4084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:27:22.960022    4084 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:27:22.964154    4084 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 12:27:22.964169    4084 cni.go:84] Creating CNI manager for ""
	I1001 12:27:22.964209    4084 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:27:22.964218    4084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:27:22.964248    4084 start.go:340] cluster config:
	{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:27:22.967877    4084 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:27:22.975054    4084 out.go:177] * Starting "force-systemd-env-777000" primary control-plane node in "force-systemd-env-777000" cluster
	I1001 12:27:22.978829    4084 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:27:22.978847    4084 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:27:22.978856    4084 cache.go:56] Caching tarball of preloaded images
	I1001 12:27:22.978926    4084 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:27:22.978932    4084 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:27:22.978992    4084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/force-systemd-env-777000/config.json ...
	I1001 12:27:22.979003    4084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/force-systemd-env-777000/config.json: {Name:mk57b13262a24fb1b0f76bb18900a755b7b51d2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:27:22.979230    4084 start.go:360] acquireMachinesLock for force-systemd-env-777000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:22.979271    4084 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "force-systemd-env-777000"
	I1001 12:27:22.979282    4084 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:22.979316    4084 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:22.982071    4084 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:22.998623    4084 start.go:159] libmachine.API.Create for "force-systemd-env-777000" (driver="qemu2")
	I1001 12:27:22.998657    4084 client.go:168] LocalClient.Create starting
	I1001 12:27:22.998720    4084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:22.998749    4084 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:22.998763    4084 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:22.998804    4084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:22.998830    4084 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:22.998840    4084 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:22.999218    4084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:23.160734    4084 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:23.311457    4084 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:23.311469    4084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:23.311677    4084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:23.321670    4084 main.go:141] libmachine: STDOUT: 
	I1001 12:27:23.321699    4084 main.go:141] libmachine: STDERR: 
	I1001 12:27:23.321769    4084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2 +20000M
	I1001 12:27:23.330879    4084 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:23.330899    4084 main.go:141] libmachine: STDERR: 
	I1001 12:27:23.330918    4084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:23.330923    4084 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:23.330938    4084 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:23.330965    4084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:1a:a9:0e:c9:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:23.332905    4084 main.go:141] libmachine: STDOUT: 
	I1001 12:27:23.332924    4084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:23.332954    4084 client.go:171] duration metric: took 334.298917ms to LocalClient.Create
	I1001 12:27:25.334836    4084 start.go:128] duration metric: took 2.355536125s to createHost
	I1001 12:27:25.334900    4084 start.go:83] releasing machines lock for "force-systemd-env-777000", held for 2.355667417s
	W1001 12:27:25.334975    4084 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:25.353121    4084 out.go:177] * Deleting "force-systemd-env-777000" in qemu2 ...
	W1001 12:27:25.385861    4084 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:25.385892    4084 start.go:729] Will try again in 5 seconds ...
	I1001 12:27:30.387999    4084 start.go:360] acquireMachinesLock for force-systemd-env-777000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:27:31.176918    4084 start.go:364] duration metric: took 788.802833ms to acquireMachinesLock for "force-systemd-env-777000"
	I1001 12:27:31.177014    4084 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-777000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:27:31.177319    4084 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:27:31.194901    4084 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 12:27:31.246457    4084 start.go:159] libmachine.API.Create for "force-systemd-env-777000" (driver="qemu2")
	I1001 12:27:31.246657    4084 client.go:168] LocalClient.Create starting
	I1001 12:27:31.246780    4084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:27:31.246851    4084 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:31.246867    4084 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:31.246939    4084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:27:31.246985    4084 main.go:141] libmachine: Decoding PEM data...
	I1001 12:27:31.246996    4084 main.go:141] libmachine: Parsing certificate...
	I1001 12:27:31.247498    4084 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:27:31.447915    4084 main.go:141] libmachine: Creating SSH key...
	I1001 12:27:31.582685    4084 main.go:141] libmachine: Creating Disk image...
	I1001 12:27:31.582691    4084 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:27:31.582882    4084 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:31.592275    4084 main.go:141] libmachine: STDOUT: 
	I1001 12:27:31.592291    4084 main.go:141] libmachine: STDERR: 
	I1001 12:27:31.592344    4084 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2 +20000M
	I1001 12:27:31.600170    4084 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:27:31.600187    4084 main.go:141] libmachine: STDERR: 
	I1001 12:27:31.600198    4084 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:31.600203    4084 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:27:31.600213    4084 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:27:31.600239    4084 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:fe:c8:53:01:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/force-systemd-env-777000/disk.qcow2
	I1001 12:27:31.601894    4084 main.go:141] libmachine: STDOUT: 
	I1001 12:27:31.601916    4084 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:27:31.601931    4084 client.go:171] duration metric: took 355.274417ms to LocalClient.Create
	I1001 12:27:33.604077    4084 start.go:128] duration metric: took 2.426784334s to createHost
	I1001 12:27:33.604126    4084 start.go:83] releasing machines lock for "force-systemd-env-777000", held for 2.42721875s
	W1001 12:27:33.604424    4084 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:27:33.630956    4084 out.go:201] 
	W1001 12:27:33.635022    4084 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:27:33.635056    4084 out.go:270] * 
	W1001 12:27:33.637716    4084 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:27:33.647813    4084 out.go:201] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-777000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
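The exit-status-80 failure above is a host-side precondition problem rather than a guest problem: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the STDERR line 'Failed to connect to "/var/run/socket_vmnet": Connection refused' means nothing is listening on that socket, i.e. the socket_vmnet daemon is apparently not running on the build host. A minimal Go sketch of a pre-flight probe for that socket (the path is taken from the failing command above; the timeout and the probe itself are assumptions, not a minikube API):

	// socketcheck.go: probe the socket_vmnet control socket the way
	// socket_vmnet_client would, to distinguish "daemon not running"
	// from genuine VM-start failures. Sketch only.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path used by the failing qemu command
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the STDERR in the log:
			// the daemon is not listening, so every qemu2 start will fail.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}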
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.430959ms)
-- stdout --
	* The control-plane node force-systemd-env-777000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-777000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-777000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-01 12:27:33.745009 -0700 PDT m=+2484.586628793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-777000 -n force-systemd-env-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-777000 -n force-systemd-env-777000: exit status 7 (34.865917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-777000" host is not running, skipping log retrieval (state="Stopped")
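Note that the harness treats exit status 7 from "minikube status" as informational ("may be ok"): status reports the host state on stdout and encodes it in the exit code rather than hard-failing. A hedged Go sketch of the same post-mortem check (the binary path and profile name are copied from the command above; the exit-code handling is an assumption based on the "(may be ok)" note):

	// statuscheck.go: run `minikube status` and inspect the exit code,
	// mirroring the post-mortem step in helpers_test.go above. Sketch only.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "force-systemd-env-777000")
		out, err := cmd.Output()
		fmt.Printf("host state: %s\n", strings.TrimSpace(string(out))) // e.g. "Stopped"
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero code (7 in the log above) reports a stopped or
			// errored host; the harness logs it and skips log retrieval
			// instead of failing the cleanup.
			fmt.Printf("status exit code: %d\n", ee.ExitCode())
		}
	}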
helpers_test.go:175: Cleaning up "force-systemd-env-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-777000
--- FAIL: TestForceSystemdEnv (11.00s)

TestFunctional/parallel/ServiceCmdConnect (32.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-755000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-755000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-66n2d" [1bb2f1ce-f87f-4e23-9f90-d6f2a02dcbb2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1001 12:06:03.345148    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-66n2d" [1bb2f1ce-f87f-4e23-9f90-d6f2a02dcbb2] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.010850417s
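The wait above polls pods matching the app=hello-node-connect label and declared the deployment healthy once the pod reached Running, even though both readiness conditions still showed ContainersNotReady; that gap is what lets the subsequent URL checks fail. A rough reproduction of that poll, sketched with kubectl via os/exec (the context and label come from the log; the polling loop and timeout are assumptions):

	// podwait.go: poll pods matching a label until they report Running,
	// roughly what the harness's wait does above. Sketch only.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 30; i++ {
			out, _ := exec.Command("kubectl", "--context", "functional-755000",
				"get", "pods", "-l", "app=hello-node-connect",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.Contains(string(out), "Running") {
				fmt.Println("pod phase is Running (containers may still be unready)")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for app=hello-node-connect")
	}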
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31265
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:16.096512    1595 retry.go:31] will retry after 1.156446931s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:17.256061    1595 retry.go:31] will retry after 1.366120769s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:18.624703    1595 retry.go:31] will retry after 1.198125859s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:19.827153    1595 retry.go:31] will retry after 4.483828844s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
E1001 12:06:23.828848    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:24.313582    1595 retry.go:31] will retry after 3.097540824s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
I1001 12:06:27.413686    1595 retry.go:31] will retry after 7.183805477s: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31265: Get "http://192.168.105.4:31265": dial tcp 192.168.105.4:31265: connect: connection refused
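Each refused GET above is followed by retry.go scheduling another attempt after a growing delay until the test's budget runs out. A compact sketch of that fetch-with-backoff pattern (the URL is the NodePort endpoint from the log; the attempt cap and doubling delay are assumptions, since the real intervals come from retry.go's jittered schedule):

	// fetchretry.go: retry an HTTP GET with increasing backoff, in the
	// spirit of the retry.go lines above. Sketch only; parameters assumed.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "http://192.168.105.4:31265" // NodePort endpoint from the log
		delay := time.Second
		for attempt := 1; attempt <= 6; attempt++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
				return
			}
			fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // the real retry.go uses jittered, capped delays
		}
		fmt.Println("giving up: connection still refused")
	}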
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-755000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-66n2d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-755000/192.168.105.4
Start Time:       Tue, 01 Oct 2024 12:06:02 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://90e634beaea94ec3a41fcd2351086df92c65fdc89d14ead1e252a9c44f8c7772
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 01 Oct 2024 12:06:25 -0700
      Finished:     Tue, 01 Oct 2024 12:06:25 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 01 Oct 2024 12:06:09 -0700
      Finished:     Tue, 01 Oct 2024 12:06:09 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gntrj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-gntrj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  31s               default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-66n2d to functional-755000
Normal   Pulling    31s               kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     25s               kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.828s (5.828s including waiting). Image size: 84957542 bytes.
Normal   Created    9s (x3 over 25s)  kubelet            Created container echoserver-arm
Normal   Started    9s (x3 over 25s)  kubelet            Started container echoserver-arm
Normal   Pulled     9s (x2 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    8s (x3 over 24s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-66n2d_default(1bb2f1ce-f87f-4e23-9f90-d6f2a02dcbb2)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-755000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
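"exec format error" means the kernel was asked to execute a binary built for a different CPU architecture: the /usr/sbin/nginx inside registry.k8s.io/echoserver-arm:1.8 is evidently not an arm64 executable, so the container exits immediately and the pod cycles through BackOff, matching the restart events above. A hedged way to confirm such a mismatch, sketched in Go around "docker image inspect" (run against the cluster's Docker daemon, e.g. after "eval $(minikube -p functional-755000 docker-env)"; the comparison helper itself is hypothetical):

	// archcheck.go: compare an image's recorded architecture against the
	// host, a likely explanation for the "exec format error" above.
	// Sketch only; the image name comes from the failing Deployment.
	package main

	import (
		"fmt"
		"os/exec"
		"runtime"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Architecture}}",
			"registry.k8s.io/echoserver-arm:1.8").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		img := strings.TrimSpace(string(out))
		fmt.Printf("image arch=%s host arch=%s\n", img, runtime.GOARCH)
		if img != runtime.GOARCH {
			fmt.Println("mismatch: binaries in this image cannot exec on this host")
		}
	}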
functional_test.go:1614: (dbg) Run:  kubectl --context functional-755000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.231.97
IPs:                      10.99.231.97
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31265/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
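The empty "Endpoints:" line is the direct cause of the connection-refused errors: no pod behind the app=hello-node-connect selector is Ready, so the Service has no endpoints and connections to NodePort 31265 have nowhere to go. A small Go sketch of that check (the context and Service name are from the log; the jsonpath query is an illustrative choice):

	// epcheck.go: list the endpoints backing the Service, the quickest
	// confirmation that the NodePort has nothing to forward to. Sketch only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-755000",
			"get", "endpoints", "hello-node-connect",
			"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		if len(out) == 0 {
			// Matches the empty "Endpoints:" line above: connections to
			// NodePort 31265 are refused because nothing is Ready.
			fmt.Println("no ready endpoints behind hello-node-connect")
			return
		}
		fmt.Printf("ready endpoint IPs: %s\n", out)
	}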
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-755000 -n functional-755000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                      |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| pause   | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:01 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | pause                                                                                                          |                   |         |         |                     |                     |
	| pause   | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:01 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | pause                                                                                                          |                   |         |         |                     |                     |
	| unpause | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:01 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | unpause                                                                                                        |                   |         |         |                     |                     |
	| unpause | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:01 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | unpause                                                                                                        |                   |         |         |                     |                     |
	| unpause | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:01 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | unpause                                                                                                        |                   |         |         |                     |                     |
	| stop    | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:01 PDT | 01 Oct 24 12:02 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | stop                                                                                                           |                   |         |         |                     |                     |
	| stop    | nospam-818000 --log_dir                                                                                        | nospam-818000     | jenkins | v1.34.0 | 01 Oct 24 12:02 PDT | 01 Oct 24 12:02 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000                                                 |                   |         |         |                     |                     |
	|         | stop                                                                                                           |                   |         |         |                     |                     |
	| cp      | functional-755000 cp                                                                                           | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT | 01 Oct 24 12:05 PDT |
	|         | testdata/cp-test.txt                                                                                           |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                                |                   |         |         |                     |                     |
	| ssh     | functional-755000 ssh cat                                                                                      | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT | 01 Oct 24 12:05 PDT |
	|         | /etc/hostname                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-755000 ssh -n                                                                                       | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT | 01 Oct 24 12:05 PDT |
	|         | functional-755000 sudo cat                                                                                     |                   |         |         |                     |                     |
	|         | /tmp/does/not/exist/cp-test.txt                                                                                |                   |         |         |                     |                     |
	| tunnel  | functional-755000 tunnel                                                                                       | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-755000 tunnel                                                                                       | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| tunnel  | functional-755000 tunnel                                                                                       | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:05 PDT |                     |
	|         | --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	| addons  | functional-755000 addons list                                                                                  | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	| addons  | functional-755000 addons list                                                                                  | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-755000 service                                                                                      | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | hello-node-connect --url                                                                                       |                   |         |         |                     |                     |
	| service | functional-755000 service list                                                                                 | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	| service | functional-755000 service list                                                                                 | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | -o json                                                                                                        |                   |         |         |                     |                     |
	| service | functional-755000 service                                                                                      | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | --namespace=default --https                                                                                    |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                               |                   |         |         |                     |                     |
	| service | functional-755000                                                                                              | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | service hello-node --url                                                                                       |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                               |                   |         |         |                     |                     |
	| service | functional-755000 service                                                                                      | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | hello-node --url                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-755000                                                                                           | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-755000 ssh findmnt                                                                                  | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | -T /mount-9p | grep 9p                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-755000 ssh -- ls                                                                                    | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | -la /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-755000 ssh cat                                                                                      | functional-755000 | jenkins | v1.34.0 | 01 Oct 24 12:06 PDT | 01 Oct 24 12:06 PDT |
	|         | /mount-9p/test-1727809585578912000                                                                             |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 12:05:08
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 12:05:08.076153    2480 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:05:08.076355    2480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:05:08.076359    2480 out.go:358] Setting ErrFile to fd 2...
	I1001 12:05:08.076361    2480 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:05:08.076542    2480 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:05:08.078095    2480 out.go:352] Setting JSON to false
	I1001 12:05:08.099925    2480 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2073,"bootTime":1727807435,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:05:08.100000    2480 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:05:08.105130    2480 out.go:177] * [functional-755000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:05:08.114008    2480 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:05:08.114037    2480 notify.go:220] Checking for updates...
	I1001 12:05:08.123976    2480 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:05:08.128027    2480 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:05:08.130956    2480 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:05:08.134034    2480 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:05:08.136976    2480 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:05:08.140232    2480 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:05:08.140291    2480 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:05:08.146970    2480 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:05:08.154005    2480 start.go:297] selected driver: qemu2
	I1001 12:05:08.154008    2480 start.go:901] validating driver "qemu2" against &{Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:05:08.154053    2480 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:05:08.156246    2480 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:05:08.156272    2480 cni.go:84] Creating CNI manager for ""
	I1001 12:05:08.156302    2480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:05:08.156352    2480 start.go:340] cluster config:
	{Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:05:08.159725    2480 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:05:08.166969    2480 out.go:177] * Starting "functional-755000" primary control-plane node in "functional-755000" cluster
	I1001 12:05:08.171006    2480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:05:08.171021    2480 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:05:08.171033    2480 cache.go:56] Caching tarball of preloaded images
	I1001 12:05:08.171106    2480 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:05:08.171110    2480 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:05:08.171176    2480 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/config.json ...
	I1001 12:05:08.171705    2480 start.go:360] acquireMachinesLock for functional-755000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:05:08.171736    2480 start.go:364] duration metric: took 26.417µs to acquireMachinesLock for "functional-755000"
	I1001 12:05:08.171742    2480 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:05:08.171744    2480 fix.go:54] fixHost starting: 
	I1001 12:05:08.172310    2480 fix.go:112] recreateIfNeeded on functional-755000: state=Running err=<nil>
	W1001 12:05:08.172316    2480 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:05:08.177035    2480 out.go:177] * Updating the running qemu2 "functional-755000" VM ...
	I1001 12:05:08.184953    2480 machine.go:93] provisionDockerMachine start ...
	I1001 12:05:08.184988    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.185101    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.185104    2480 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 12:05:08.238701    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-755000
	
	I1001 12:05:08.238713    2480 buildroot.go:166] provisioning hostname "functional-755000"
	I1001 12:05:08.238760    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.238890    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.238893    2480 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-755000 && echo "functional-755000" | sudo tee /etc/hostname
	I1001 12:05:08.294678    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-755000
	
	I1001 12:05:08.294729    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.294845    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.294851    2480 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-755000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-755000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-755000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 12:05:08.346933    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 12:05:08.346941    2480 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19736-1073/.minikube CaCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19736-1073/.minikube}
	I1001 12:05:08.346948    2480 buildroot.go:174] setting up certificates
	I1001 12:05:08.346953    2480 provision.go:84] configureAuth start
	I1001 12:05:08.346959    2480 provision.go:143] copyHostCerts
	I1001 12:05:08.347025    2480 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem, removing ...
	I1001 12:05:08.347030    2480 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem
	I1001 12:05:08.347175    2480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem (1078 bytes)
	I1001 12:05:08.347342    2480 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem, removing ...
	I1001 12:05:08.347344    2480 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem
	I1001 12:05:08.347400    2480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem (1123 bytes)
	I1001 12:05:08.347517    2480 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem, removing ...
	I1001 12:05:08.347518    2480 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem
	I1001 12:05:08.347571    2480 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem (1675 bytes)
	I1001 12:05:08.347660    2480 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem org=jenkins.functional-755000 san=[127.0.0.1 192.168.105.4 functional-755000 localhost minikube]
	I1001 12:05:08.399055    2480 provision.go:177] copyRemoteCerts
	I1001 12:05:08.399090    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 12:05:08.399095    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:08.431731    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 12:05:08.440495    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1001 12:05:08.448802    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 12:05:08.457022    2480 provision.go:87] duration metric: took 110.061667ms to configureAuth
	I1001 12:05:08.457028    2480 buildroot.go:189] setting minikube options for container-runtime
	I1001 12:05:08.457148    2480 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:05:08.457182    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.457276    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.457280    2480 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 12:05:08.510378    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 12:05:08.510384    2480 buildroot.go:70] root file system type: tmpfs
	I1001 12:05:08.510429    2480 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 12:05:08.510491    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.510601    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.510631    2480 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 12:05:08.567986    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 12:05:08.568052    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.568165    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.568172    2480 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 12:05:08.621076    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 12:05:08.621083    2480 machine.go:96] duration metric: took 436.129125ms to provisionDockerMachine
	I1001 12:05:08.621087    2480 start.go:293] postStartSetup for "functional-755000" (driver="qemu2")
	I1001 12:05:08.621093    2480 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 12:05:08.621149    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 12:05:08.621156    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:08.649764    2480 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 12:05:08.651074    2480 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 12:05:08.651078    2480 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/addons for local assets ...
	I1001 12:05:08.651146    2480 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/files for local assets ...
	I1001 12:05:08.651267    2480 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem -> 15952.pem in /etc/ssl/certs
	I1001 12:05:08.651379    2480 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/test/nested/copy/1595/hosts -> hosts in /etc/test/nested/copy/1595
	I1001 12:05:08.651411    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1595
	I1001 12:05:08.654552    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:05:08.663168    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/test/nested/copy/1595/hosts --> /etc/test/nested/copy/1595/hosts (40 bytes)
	I1001 12:05:08.671292    2480 start.go:296] duration metric: took 50.194334ms for postStartSetup
	I1001 12:05:08.671303    2480 fix.go:56] duration metric: took 499.5625ms for fixHost
	I1001 12:05:08.671349    2480 main.go:141] libmachine: Using SSH client type: native
	I1001 12:05:08.671451    2480 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100799c00] 0x10079c440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 12:05:08.671454    2480 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 12:05:08.723622    2480 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727809508.709661778
	
	I1001 12:05:08.723627    2480 fix.go:216] guest clock: 1727809508.709661778
	I1001 12:05:08.723630    2480 fix.go:229] Guest: 2024-10-01 12:05:08.709661778 -0700 PDT Remote: 2024-10-01 12:05:08.671304 -0700 PDT m=+0.633704459 (delta=38.357778ms)
	I1001 12:05:08.723639    2480 fix.go:200] guest clock delta is within tolerance: 38.357778ms
	I1001 12:05:08.723641    2480 start.go:83] releasing machines lock for "functional-755000", held for 551.905958ms
	I1001 12:05:08.723930    2480 ssh_runner.go:195] Run: cat /version.json
	I1001 12:05:08.723935    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:08.723946    2480 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 12:05:08.723960    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:08.798929    2480 ssh_runner.go:195] Run: systemctl --version
	I1001 12:05:08.800982    2480 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 12:05:08.802727    2480 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 12:05:08.802753    2480 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 12:05:08.805817    2480 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 12:05:08.805823    2480 start.go:495] detecting cgroup driver to use...
	I1001 12:05:08.805882    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:05:08.812135    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1001 12:05:08.816595    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 12:05:08.820626    2480 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 12:05:08.820649    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 12:05:08.824064    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:05:08.828047    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 12:05:08.831874    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:05:08.835505    2480 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 12:05:08.839186    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 12:05:08.842819    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 12:05:08.846394    2480 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 12:05:08.849993    2480 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 12:05:08.853327    2480 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 12:05:08.857215    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:08.951773    2480 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 12:05:08.963493    2480 start.go:495] detecting cgroup driver to use...
	I1001 12:05:08.963564    2480 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 12:05:08.969725    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:05:08.976160    2480 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 12:05:08.982892    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:05:08.988368    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:05:08.993627    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:05:09.000115    2480 ssh_runner.go:195] Run: which cri-dockerd
	I1001 12:05:09.001511    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 12:05:09.005343    2480 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1001 12:05:09.011241    2480 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 12:05:09.105864    2480 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 12:05:09.199861    2480 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 12:05:09.199918    2480 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 12:05:09.206647    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:09.297198    2480 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:05:21.697444    2480 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.400313583s)
	I1001 12:05:21.697527    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 12:05:21.703477    2480 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 12:05:21.711362    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:05:21.717331    2480 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 12:05:21.806887    2480 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 12:05:21.893314    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:21.982520    2480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 12:05:21.989639    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:05:21.995453    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:22.081391    2480 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 12:05:22.111420    2480 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 12:05:22.111531    2480 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 12:05:22.114079    2480 start.go:563] Will wait 60s for crictl version
	I1001 12:05:22.114139    2480 ssh_runner.go:195] Run: which crictl
	I1001 12:05:22.115843    2480 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 12:05:22.128202    2480 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1001 12:05:22.128287    2480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:05:22.135321    2480 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:05:22.152985    2480 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1001 12:05:22.153095    2480 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1001 12:05:22.157976    2480 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1001 12:05:22.162966    2480 kubeadm.go:883] updating cluster {Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 12:05:22.163033    2480 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:05:22.163101    2480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:05:22.168954    2480 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-755000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1001 12:05:22.168959    2480 docker.go:615] Images already preloaded, skipping extraction
	I1001 12:05:22.169022    2480 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:05:22.174653    2480 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-755000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1001 12:05:22.174658    2480 cache_images.go:84] Images are preloaded, skipping loading
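
The two identical `docker images` listings confirm the preload tarball has already been extracted into the runtime, so image loading is skipped. A small Go sketch of that kind of presence check over the formatted output (image list abridged from the log; illustrative only):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output of `docker images --format {{.Repository}}:{{.Tag}}`, abridged.
	got := `registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10`

	have := map[string]bool{}
	for _, line := range strings.Split(got, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			have[line] = true
		}
	}

	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
			return
		}
	}
	fmt.Println("images already preloaded, skipping extraction")
}
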
	I1001 12:05:22.174662    2480 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I1001 12:05:22.174714    2480 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-755000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 12:05:22.174774    2480 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 12:05:22.189614    2480 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1001 12:05:22.189623    2480 cni.go:84] Creating CNI manager for ""
	I1001 12:05:22.189629    2480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:05:22.189633    2480 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 12:05:22.189641    2480 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-755000 NodeName:functional-755000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 12:05:22.189711    2480 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-755000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
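
The block above is the complete multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is shipped to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of reading one field back out of that multi-document layout, assuming the third-party gopkg.in/yaml.v3 package; cgroupDriver is the field minikube keeps in sync with the runtime's cgroup driver (see the `docker info --format {{.CgroupDriver}}` probe further down). Illustrative only:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	yaml "gopkg.in/yaml.v3"
)

// doc is a minimal view of the fields we care about; all other keys in each
// YAML document are ignored during decoding.
type doc struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	// The file holds several documents separated by "---", so decode in a loop.
	dec := yaml.NewDecoder(bytes.NewReader(raw))
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // io.EOF after the last document
		}
		if d.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroupDriver:", d.CgroupDriver) // "cgroupfs" per the log
		}
	}
}
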
	I1001 12:05:22.189778    2480 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 12:05:22.193404    2480 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 12:05:22.193429    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 12:05:22.197110    2480 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1001 12:05:22.203750    2480 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 12:05:22.209584    2480 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I1001 12:05:22.215479    2480 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1001 12:05:22.217039    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:22.304189    2480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:05:22.310361    2480 certs.go:68] Setting up /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000 for IP: 192.168.105.4
	I1001 12:05:22.310372    2480 certs.go:194] generating shared ca certs ...
	I1001 12:05:22.310384    2480 certs.go:226] acquiring lock for ca certs: {Name:mk17296519b35110345119718efed98a68b82ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:05:22.310539    2480 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key
	I1001 12:05:22.310589    2480 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key
	I1001 12:05:22.310593    2480 certs.go:256] generating profile certs ...
	I1001 12:05:22.310646    2480 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.key
	I1001 12:05:22.310691    2480 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/apiserver.key.233c9fe9
	I1001 12:05:22.310734    2480 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/proxy-client.key
	I1001 12:05:22.310876    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem (1338 bytes)
	W1001 12:05:22.310904    2480 certs.go:480] ignoring /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595_empty.pem, impossibly tiny 0 bytes
	I1001 12:05:22.310908    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 12:05:22.310927    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem (1078 bytes)
	I1001 12:05:22.310949    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem (1123 bytes)
	I1001 12:05:22.310965    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem (1675 bytes)
	I1001 12:05:22.310999    2480 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:05:22.311339    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 12:05:22.319831    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 12:05:22.328423    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 12:05:22.336925    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 12:05:22.345215    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 12:05:22.353340    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 12:05:22.361572    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 12:05:22.369830    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 12:05:22.377742    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 12:05:22.385636    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem --> /usr/share/ca-certificates/1595.pem (1338 bytes)
	I1001 12:05:22.393827    2480 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /usr/share/ca-certificates/15952.pem (1708 bytes)
	I1001 12:05:22.402051    2480 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 12:05:22.407915    2480 ssh_runner.go:195] Run: openssl version
	I1001 12:05:22.409806    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 12:05:22.413709    2480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:05:22.415309    2480 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:05:22.415336    2480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:05:22.417261    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 12:05:22.421033    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1595.pem && ln -fs /usr/share/ca-certificates/1595.pem /etc/ssl/certs/1595.pem"
	I1001 12:05:22.424862    2480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1595.pem
	I1001 12:05:22.426443    2480 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:02 /usr/share/ca-certificates/1595.pem
	I1001 12:05:22.426472    2480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1595.pem
	I1001 12:05:22.428364    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1595.pem /etc/ssl/certs/51391683.0"
	I1001 12:05:22.431975    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15952.pem && ln -fs /usr/share/ca-certificates/15952.pem /etc/ssl/certs/15952.pem"
	I1001 12:05:22.435692    2480 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15952.pem
	I1001 12:05:22.437094    2480 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:02 /usr/share/ca-certificates/15952.pem
	I1001 12:05:22.437115    2480 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15952.pem
	I1001 12:05:22.439060    2480 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15952.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 12:05:22.442194    2480 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 12:05:22.443706    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 12:05:22.445601    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 12:05:22.447656    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 12:05:22.449663    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 12:05:22.451679    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 12:05:22.453754    2480 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
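
The six openssl runs above are 24-hour expiry checks (`-checkend 86400`) on the control-plane client and serving certificates; a certificate that would expire within the window gets regenerated. A Go equivalent using only the standard library (path taken from the log; illustrative, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the first certificate in the PEM file expires
// within the given window — the same test as `openssl x509 -checkend 86400`.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
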
	I1001 12:05:22.455778    2480 kubeadm.go:392] StartCluster: {Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:05:22.455857    2480 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:05:22.462103    2480 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 12:05:22.465546    2480 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 12:05:22.465555    2480 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 12:05:22.465586    2480 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 12:05:22.469111    2480 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:05:22.469387    2480 kubeconfig.go:125] found "functional-755000" server: "https://192.168.105.4:8441"
	I1001 12:05:22.470310    2480 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 12:05:22.474061    2480 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
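
The unified diff shows why the restart path reconfigures: the freshly rendered kubeadm.yaml.new differs from the live kubeadm.yaml only in the enable-admission-plugins argument. A minimal Go sketch of the underlying drift decision (a byte comparison rather than `diff -u`; illustrative only, paths from the log):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
)

func main() {
	// The live config and the freshly rendered one, per the log.
	old, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	fresh, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(old, fresh) {
		fmt.Println("kubeadm config drift detected: reconfigure cluster from the new file")
	}
}
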
	I1001 12:05:22.474064    2480 kubeadm.go:1160] stopping kube-system containers ...
	I1001 12:05:22.474125    2480 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:05:22.481788    2480 docker.go:483] Stopping containers: [6ccf69b3b38e f3d47f75396a bd86ebb608c0 c7836ff39d26 e8c46895c9e7 60dafa289868 e2a1f3a1a703 0e1f86ad163d d6c9f9e1c40b cc8d19e7cae4 999e614e88d3 f31b79d35f43 861c7d73c709 2d50e571c0c3 1b094618de54 bef1c78a07f6 3ca6cf487b61 dc22b7bd29eb e73c525c0501 05f1eea056eb 6cbe08f7bf0c 41871d95e4b0 c792f4cc06cf 9d3146076f8d 2592ea0098d8 429dd51f4982 69acf812ebf0 f352f257ed71]
	I1001 12:05:22.481870    2480 ssh_runner.go:195] Run: docker stop 6ccf69b3b38e f3d47f75396a bd86ebb608c0 c7836ff39d26 e8c46895c9e7 60dafa289868 e2a1f3a1a703 0e1f86ad163d d6c9f9e1c40b cc8d19e7cae4 999e614e88d3 f31b79d35f43 861c7d73c709 2d50e571c0c3 1b094618de54 bef1c78a07f6 3ca6cf487b61 dc22b7bd29eb e73c525c0501 05f1eea056eb 6cbe08f7bf0c 41871d95e4b0 c792f4cc06cf 9d3146076f8d 2592ea0098d8 429dd51f4982 69acf812ebf0 f352f257ed71
	I1001 12:05:22.488685    2480 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 12:05:22.591630    2480 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:05:22.597874    2480 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Oct  1 19:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  1 19:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  1 19:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Oct  1 19:04 /etc/kubernetes/scheduler.conf
	
	I1001 12:05:22.597913    2480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1001 12:05:22.602544    2480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1001 12:05:22.607157    2480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1001 12:05:22.611718    2480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:05:22.611754    2480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:05:22.616015    2480 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1001 12:05:22.619876    2480 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:05:22.619909    2480 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 12:05:22.623814    2480 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:05:22.627792    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:22.647398    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:23.165459    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:23.266836    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:23.300376    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:23.347950    2480 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:05:23.348009    2480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:05:23.851073    2480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:05:24.350156    2480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:05:24.356785    2480 api_server.go:72] duration metric: took 1.008841125s to wait for apiserver process to appear ...
	I1001 12:05:24.356792    2480 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:05:24.356807    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:26.345739    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 12:05:26.345748    2480 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 12:05:26.345753    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:26.386871    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 12:05:26.386880    2480 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 12:05:26.386886    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:26.389840    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 12:05:26.389846    2480 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 12:05:26.858966    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:26.873075    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 12:05:26.873103    2480 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 12:05:27.357174    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:27.364196    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 12:05:27.364207    2480 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 12:05:27.858836    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:27.861543    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1001 12:05:27.865661    2480 api_server.go:141] control plane version: v1.31.1
	I1001 12:05:27.865668    2480 api_server.go:131] duration metric: took 3.508896375s to wait for apiserver health ...
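
The preceding ~3.5s of output is the healthz poll: a request roughly every 500ms, with 403 expected while the RBAC bootstrap roles are still missing (anonymous users cannot read /healthz yet) and 500 while individual poststart hooks report failed, until the endpoint returns 200 "ok". A hedged Go sketch of such a poll (endpoint and interval taken from the log; TLS verification is skipped because the apiserver's serving cert is not in the host trust store; illustrative, not api_server.go itself):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Skip cert verification for this probe; the apiserver presents a
		// cluster-internal certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	for {
		resp, err := client.Get("https://192.168.105.4:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 before RBAC bootstrap, 500 while poststart hooks still fail.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // minikube bounds the whole wait with a deadline
	}
}
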
	I1001 12:05:27.865672    2480 cni.go:84] Creating CNI manager for ""
	I1001 12:05:27.865678    2480 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:05:27.940410    2480 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 12:05:27.943312    2480 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 12:05:27.947273    2480 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 12:05:27.952773    2480 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 12:05:27.957304    2480 system_pods.go:59] 7 kube-system pods found
	I1001 12:05:27.957312    2480 system_pods.go:61] "coredns-7c65d6cfc9-d2x76" [4d63e17e-870f-4fe4-a84c-f2ff1a92f0cf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 12:05:27.957315    2480 system_pods.go:61] "etcd-functional-755000" [ae634b48-2fef-4648-aa4a-19a2e957ae43] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 12:05:27.957318    2480 system_pods.go:61] "kube-apiserver-functional-755000" [ded10438-176b-4b1c-9524-7fd7d7845da3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1001 12:05:27.957320    2480 system_pods.go:61] "kube-controller-manager-functional-755000" [7b989a28-3c91-46d3-9881-c0983da898a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 12:05:27.957323    2480 system_pods.go:61] "kube-proxy-79fmc" [7da41176-405a-4b56-a762-385bade03ed3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 12:05:27.957325    2480 system_pods.go:61] "kube-scheduler-functional-755000" [0ff2fee9-b564-4fcf-80d1-3d60ea83a2d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 12:05:27.957327    2480 system_pods.go:61] "storage-provisioner" [fd8002ce-0707-45df-a916-46489d2b6404] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 12:05:27.957329    2480 system_pods.go:74] duration metric: took 4.552625ms to wait for pod list to return data ...
	I1001 12:05:27.957331    2480 node_conditions.go:102] verifying NodePressure condition ...
	I1001 12:05:27.958877    2480 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 12:05:27.958882    2480 node_conditions.go:123] node cpu capacity is 2
	I1001 12:05:27.958886    2480 node_conditions.go:105] duration metric: took 1.553166ms to run NodePressure ...
	I1001 12:05:27.958893    2480 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:05:28.180528    2480 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 12:05:28.184793    2480 kubeadm.go:739] kubelet initialised
	I1001 12:05:28.184801    2480 kubeadm.go:740] duration metric: took 4.2575ms waiting for restarted kubelet to initialise ...
	I1001 12:05:28.184809    2480 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 12:05:28.189313    2480 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:30.196646    2480 pod_ready.go:103] pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace has status "Ready":"False"
	I1001 12:05:32.205637    2480 pod_ready.go:103] pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace has status "Ready":"False"
	I1001 12:05:34.704757    2480 pod_ready.go:93] pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:34.704781    2480 pod_ready.go:82] duration metric: took 6.515497458s for pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:34.704802    2480 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:34.712897    2480 pod_ready.go:93] pod "etcd-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:34.712907    2480 pod_ready.go:82] duration metric: took 8.097709ms for pod "etcd-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:34.712917    2480 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:36.727613    2480 pod_ready.go:103] pod "kube-apiserver-functional-755000" in "kube-system" namespace has status "Ready":"False"
	I1001 12:05:38.729586    2480 pod_ready.go:103] pod "kube-apiserver-functional-755000" in "kube-system" namespace has status "Ready":"False"
	I1001 12:05:41.228112    2480 pod_ready.go:103] pod "kube-apiserver-functional-755000" in "kube-system" namespace has status "Ready":"False"
	I1001 12:05:42.728635    2480 pod_ready.go:93] pod "kube-apiserver-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:42.728657    2480 pod_ready.go:82] duration metric: took 8.015780875s for pod "kube-apiserver-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.728675    2480 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.736074    2480 pod_ready.go:93] pod "kube-controller-manager-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:42.736084    2480 pod_ready.go:82] duration metric: took 7.401ms for pod "kube-controller-manager-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.736094    2480 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-79fmc" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.742599    2480 pod_ready.go:93] pod "kube-proxy-79fmc" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:42.742614    2480 pod_ready.go:82] duration metric: took 6.513625ms for pod "kube-proxy-79fmc" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.742629    2480 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.748880    2480 pod_ready.go:93] pod "kube-scheduler-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:42.748889    2480 pod_ready.go:82] duration metric: took 6.252917ms for pod "kube-scheduler-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.748898    2480 pod_ready.go:39] duration metric: took 14.564175208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
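
The pod_ready.go waits above poll each system-critical pod until its PodReady condition turns True. A sketch of the same check with client-go (assumes the k8s.io/client-go module; pod name and kubeconfig path taken from the log; illustrative, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True — the same
// test the "Ready" waits above perform.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-d2x76", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
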
	I1001 12:05:42.748922    2480 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 12:05:42.759393    2480 ops.go:34] apiserver oom_adj: -16
	I1001 12:05:42.759401    2480 kubeadm.go:597] duration metric: took 20.293972167s to restartPrimaryControlPlane
	I1001 12:05:42.759408    2480 kubeadm.go:394] duration metric: took 20.303760917s to StartCluster
	I1001 12:05:42.759423    2480 settings.go:142] acquiring lock: {Name:mk456a8b96b1746a679d3a85129b9d4d9b38bdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:05:42.759611    2480 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:05:42.760279    2480 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:05:42.760771    2480 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:05:42.760789    2480 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 12:05:42.760856    2480 addons.go:69] Setting storage-provisioner=true in profile "functional-755000"
	I1001 12:05:42.760869    2480 addons.go:234] Setting addon storage-provisioner=true in "functional-755000"
	W1001 12:05:42.760873    2480 addons.go:243] addon storage-provisioner should already be in state true
	I1001 12:05:42.760891    2480 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:05:42.760895    2480 host.go:66] Checking if "functional-755000" exists ...
	I1001 12:05:42.760900    2480 addons.go:69] Setting default-storageclass=true in profile "functional-755000"
	I1001 12:05:42.760923    2480 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-755000"
	I1001 12:05:42.762606    2480 addons.go:234] Setting addon default-storageclass=true in "functional-755000"
	W1001 12:05:42.762611    2480 addons.go:243] addon default-storageclass should already be in state true
	I1001 12:05:42.762621    2480 host.go:66] Checking if "functional-755000" exists ...
	I1001 12:05:42.765899    2480 out.go:177] * Verifying Kubernetes components...
	I1001 12:05:42.766487    2480 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 12:05:42.770674    2480 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 12:05:42.770685    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:42.774738    2480 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:05:42.778863    2480 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:05:42.781794    2480 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:05:42.781799    2480 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 12:05:42.781806    2480 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
	I1001 12:05:42.883660    2480 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:05:42.890051    2480 node_ready.go:35] waiting up to 6m0s for node "functional-755000" to be "Ready" ...
	I1001 12:05:42.891578    2480 node_ready.go:49] node "functional-755000" has status "Ready":"True"
	I1001 12:05:42.891587    2480 node_ready.go:38] duration metric: took 1.522333ms for node "functional-755000" to be "Ready" ...
	I1001 12:05:42.891589    2480 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 12:05:42.893947    2480 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:42.896912    2480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 12:05:42.961688    2480 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:05:43.117725    2480 pod_ready.go:93] pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:43.117731    2480 pod_ready.go:82] duration metric: took 223.780625ms for pod "coredns-7c65d6cfc9-d2x76" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:43.117735    2480 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:43.225845    2480 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1001 12:05:43.231330    2480 addons.go:510] duration metric: took 470.553583ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 12:05:43.520157    2480 pod_ready.go:93] pod "etcd-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:43.520166    2480 pod_ready.go:82] duration metric: took 402.42925ms for pod "etcd-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:43.520174    2480 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:43.921581    2480 pod_ready.go:93] pod "kube-apiserver-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:43.921599    2480 pod_ready.go:82] duration metric: took 401.421708ms for pod "kube-apiserver-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:43.921684    2480 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:44.322821    2480 pod_ready.go:93] pod "kube-controller-manager-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:44.322850    2480 pod_ready.go:82] duration metric: took 401.154084ms for pod "kube-controller-manager-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:44.322870    2480 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-79fmc" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:44.724754    2480 pod_ready.go:93] pod "kube-proxy-79fmc" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:44.724785    2480 pod_ready.go:82] duration metric: took 401.901ms for pod "kube-proxy-79fmc" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:44.724811    2480 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:45.124758    2480 pod_ready.go:93] pod "kube-scheduler-functional-755000" in "kube-system" namespace has status "Ready":"True"
	I1001 12:05:45.124782    2480 pod_ready.go:82] duration metric: took 399.955417ms for pod "kube-scheduler-functional-755000" in "kube-system" namespace to be "Ready" ...
	I1001 12:05:45.124802    2480 pod_ready.go:39] duration metric: took 2.233217458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 12:05:45.124855    2480 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:05:45.125090    2480 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:05:45.143635    2480 api_server.go:72] duration metric: took 2.382852917s to wait for apiserver process to appear ...
	I1001 12:05:45.143653    2480 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:05:45.143671    2480 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 12:05:45.149955    2480 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1001 12:05:45.151249    2480 api_server.go:141] control plane version: v1.31.1
	I1001 12:05:45.151257    2480 api_server.go:131] duration metric: took 7.6005ms to wait for apiserver health ...
	I1001 12:05:45.151263    2480 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 12:05:45.325469    2480 system_pods.go:59] 7 kube-system pods found
	I1001 12:05:45.325486    2480 system_pods.go:61] "coredns-7c65d6cfc9-d2x76" [4d63e17e-870f-4fe4-a84c-f2ff1a92f0cf] Running
	I1001 12:05:45.325490    2480 system_pods.go:61] "etcd-functional-755000" [ae634b48-2fef-4648-aa4a-19a2e957ae43] Running
	I1001 12:05:45.325494    2480 system_pods.go:61] "kube-apiserver-functional-755000" [ded10438-176b-4b1c-9524-7fd7d7845da3] Running
	I1001 12:05:45.325498    2480 system_pods.go:61] "kube-controller-manager-functional-755000" [7b989a28-3c91-46d3-9881-c0983da898a7] Running
	I1001 12:05:45.325501    2480 system_pods.go:61] "kube-proxy-79fmc" [7da41176-405a-4b56-a762-385bade03ed3] Running
	I1001 12:05:45.325503    2480 system_pods.go:61] "kube-scheduler-functional-755000" [0ff2fee9-b564-4fcf-80d1-3d60ea83a2d8] Running
	I1001 12:05:45.325506    2480 system_pods.go:61] "storage-provisioner" [fd8002ce-0707-45df-a916-46489d2b6404] Running
	I1001 12:05:45.325511    2480 system_pods.go:74] duration metric: took 174.243583ms to wait for pod list to return data ...
	I1001 12:05:45.325517    2480 default_sa.go:34] waiting for default service account to be created ...
	I1001 12:05:45.525434    2480 default_sa.go:45] found service account: "default"
	I1001 12:05:45.525460    2480 default_sa.go:55] duration metric: took 199.933333ms for default service account to be created ...
	I1001 12:05:45.525476    2480 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 12:05:45.732361    2480 system_pods.go:86] 7 kube-system pods found
	I1001 12:05:45.732398    2480 system_pods.go:89] "coredns-7c65d6cfc9-d2x76" [4d63e17e-870f-4fe4-a84c-f2ff1a92f0cf] Running
	I1001 12:05:45.732409    2480 system_pods.go:89] "etcd-functional-755000" [ae634b48-2fef-4648-aa4a-19a2e957ae43] Running
	I1001 12:05:45.732416    2480 system_pods.go:89] "kube-apiserver-functional-755000" [ded10438-176b-4b1c-9524-7fd7d7845da3] Running
	I1001 12:05:45.732422    2480 system_pods.go:89] "kube-controller-manager-functional-755000" [7b989a28-3c91-46d3-9881-c0983da898a7] Running
	I1001 12:05:45.732427    2480 system_pods.go:89] "kube-proxy-79fmc" [7da41176-405a-4b56-a762-385bade03ed3] Running
	I1001 12:05:45.732432    2480 system_pods.go:89] "kube-scheduler-functional-755000" [0ff2fee9-b564-4fcf-80d1-3d60ea83a2d8] Running
	I1001 12:05:45.732436    2480 system_pods.go:89] "storage-provisioner" [fd8002ce-0707-45df-a916-46489d2b6404] Running
	I1001 12:05:45.732451    2480 system_pods.go:126] duration metric: took 206.969166ms to wait for k8s-apps to be running ...
	I1001 12:05:45.732468    2480 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 12:05:45.732763    2480 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 12:05:45.753319    2480 system_svc.go:56] duration metric: took 20.825584ms WaitForService to wait for kubelet
	I1001 12:05:45.753337    2480 kubeadm.go:582] duration metric: took 2.992565459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:05:45.753358    2480 node_conditions.go:102] verifying NodePressure condition ...
	I1001 12:05:45.919447    2480 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 12:05:45.919456    2480 node_conditions.go:123] node cpu capacity is 2
	I1001 12:05:45.919465    2480 node_conditions.go:105] duration metric: took 166.104083ms to run NodePressure ...
	I1001 12:05:45.919476    2480 start.go:241] waiting for startup goroutines ...
	I1001 12:05:45.919483    2480 start.go:246] waiting for cluster config update ...
	I1001 12:05:45.919493    2480 start.go:255] writing updated cluster config ...
	I1001 12:05:45.920075    2480 ssh_runner.go:195] Run: rm -f paused
	I1001 12:05:45.966355    2480 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I1001 12:05:45.970049    2480 out.go:201] 
	W1001 12:05:45.974127    2480 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I1001 12:05:45.977041    2480 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I1001 12:05:45.984049    2480 out.go:177] * Done! kubectl is now configured to use "functional-755000" cluster and "default" namespace by default
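	Note: the tail of the start log above is minikube's readiness gate. The node must report Ready, each labeled control-plane pod must report Ready, and only then is the apiserver's /healthz polled until it returns 200 "ok" before kubectl is pointed at the cluster. Below is a minimal Go sketch of that final probe, using the endpoint from the log; skipping TLS verification is a shortcut for illustration only, not how minikube authenticates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Poll https://192.168.105.4:8441/healthz until it answers 200 "ok",
		// mirroring the api_server.go lines above. InsecureSkipVerify is for
		// illustration; a real client would trust the cluster CA instead.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(30 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.105.4:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // back off briefly between attempts
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}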
	
	
	==> Docker <==
	Oct 01 19:06:25 functional-755000 dockerd[5799]: time="2024-10-01T19:06:25.354025225Z" level=info msg="shim disconnected" id=90e634beaea94ec3a41fcd2351086df92c65fdc89d14ead1e252a9c44f8c7772 namespace=moby
	Oct 01 19:06:25 functional-755000 dockerd[5799]: time="2024-10-01T19:06:25.354055933Z" level=warning msg="cleaning up after shim disconnected" id=90e634beaea94ec3a41fcd2351086df92c65fdc89d14ead1e252a9c44f8c7772 namespace=moby
	Oct 01 19:06:25 functional-755000 dockerd[5799]: time="2024-10-01T19:06:25.354060017Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 19:06:27 functional-755000 dockerd[5799]: time="2024-10-01T19:06:27.257120081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 19:06:27 functional-755000 dockerd[5799]: time="2024-10-01T19:06:27.257413079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 19:06:27 functional-755000 dockerd[5799]: time="2024-10-01T19:06:27.257437871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:27 functional-755000 dockerd[5799]: time="2024-10-01T19:06:27.257493412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:27 functional-755000 cri-dockerd[6054]: time="2024-10-01T19:06:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/eb3bfa34b78ce3e6878d53809bd122943900f5870c1a0e3e0c3e2106268d5400/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.348355731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.348459147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.348467230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.348504896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.377280383Z" level=info msg="shim disconnected" id=57a816758345e41e6fe5641d49c240052306540edab5a5fe5ad08c5ae42f1e13 namespace=moby
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.377331133Z" level=warning msg="cleaning up after shim disconnected" id=57a816758345e41e6fe5641d49c240052306540edab5a5fe5ad08c5ae42f1e13 namespace=moby
	Oct 01 19:06:32 functional-755000 dockerd[5799]: time="2024-10-01T19:06:32.377336216Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 19:06:32 functional-755000 dockerd[5793]: time="2024-10-01T19:06:32.377508881Z" level=info msg="ignoring event" container=57a816758345e41e6fe5641d49c240052306540edab5a5fe5ad08c5ae42f1e13 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 01 19:06:33 functional-755000 cri-dockerd[6054]: time="2024-10-01T19:06:33Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.601551799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.601583257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.601593840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.601788672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.649865802Z" level=info msg="shim disconnected" id=a207b7eea63af40379f915ab2f18d30fe7966dea89211a7a41cf018c86374b45 namespace=moby
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.649897136Z" level=warning msg="cleaning up after shim disconnected" id=a207b7eea63af40379f915ab2f18d30fe7966dea89211a7a41cf018c86374b45 namespace=moby
	Oct 01 19:06:33 functional-755000 dockerd[5799]: time="2024-10-01T19:06:33.649901552Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 19:06:33 functional-755000 dockerd[5793]: time="2024-10-01T19:06:33.650020801Z" level=info msg="ignoring event" container=a207b7eea63af40379f915ab2f18d30fe7966dea89211a7a41cf018c86374b45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a207b7eea63af       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   1 second ago         Exited              mount-munger              0                   eb3bfa34b78ce       busybox-mount
	57a816758345e       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            2                   171d9aba76c0f       hello-node-64b4f8f9ff-58hvf
	90e634beaea94       72565bf5bbedf                                                                                         9 seconds ago        Exited              echoserver-arm            2                   39ff47651023f       hello-node-connect-65d86f57f4-66n2d
	8d178f84d542d       nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb                         22 seconds ago       Running             myfrontend                0                   2c0de7363d1db       sp-pod
	0fc9cd0fa6bab       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         39 seconds ago       Running             nginx                     0                   b1d6a6f0b83ec       nginx-svc
	1ff48679047ad       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   d01ef03823938       coredns-7c65d6cfc9-d2x76
	a5111a3e358ff       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   b6c88e0c8b81e       kube-proxy-79fmc
	7ce88b5ac99ae       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   ce3a0b91d5be6       storage-provisioner
	8ea646409c17c       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   3716d388a5458       kube-scheduler-functional-755000
	9d6fe4342e7e2       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   04fb884ea443d       kube-controller-manager-functional-755000
	1fe8a963f584c       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   7e1a734d271df       etcd-functional-755000
	34f068dfa5842       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   a774d36d0f248       kube-apiserver-functional-755000
	6ccf69b3b38e1       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   c7836ff39d261       coredns-7c65d6cfc9-d2x76
	f3d47f75396a3       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   60dafa289868c       storage-provisioner
	bd86ebb608c04       24a140c548c07                                                                                         2 minutes ago        Exited              kube-proxy                1                   e8c46895c9e7f       kube-proxy-79fmc
	e2a1f3a1a703b       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   861c7d73c7094       etcd-functional-755000
	d6c9f9e1c40b3       279f381cb3736                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f31b79d35f439       kube-controller-manager-functional-755000
	cc8d19e7cae44       7f8aa378bb47d                                                                                         2 minutes ago        Exited              kube-scheduler            1                   999e614e88d3c       kube-scheduler-functional-755000
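	Note: in the table above the control-plane containers are Running at ATTEMPT 2 (restarted along with the cluster), while both echoserver-arm containers have Exited after repeated attempts, the restart-loop pattern behind the hello-node pods. A hedged client-go sketch that reads the same restart counts and last exit codes from the API side; the kubeconfig path is an assumption, not what this test run uses.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location (~/.kube/config); adjust for a minikube profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pods, err := client.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			for _, cs := range pod.Status.ContainerStatuses {
				// LastTerminationState records why the previous attempt exited.
				if t := cs.LastTerminationState.Terminated; t != nil {
					fmt.Printf("%s/%s restarts=%d lastExitCode=%d reason=%s\n",
						pod.Name, cs.Name, cs.RestartCount, t.ExitCode, t.Reason)
				}
			}
		}
	}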
	
	
	==> coredns [1ff48679047a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46962 - 16657 "HINFO IN 6907602508615407378.2780005665142897278. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004278786s
	[INFO] 10.244.0.1:49304 - 3399 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000086958s
	[INFO] 10.244.0.1:15722 - 51793 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000191374s
	[INFO] 10.244.0.1:64441 - 42089 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000025458s
	[INFO] 10.244.0.1:30347 - 12867 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001653448s
	[INFO] 10.244.0.1:24173 - 39540 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000129249s
	[INFO] 10.244.0.1:61483 - 58849 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000133624s
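	Note: the queries above show the in-cluster name nginx-svc.default.svc.cluster.local answering for both A and AAAA records. A small Go sketch of the same lookup; it only works from inside the cluster, e.g. a pod whose /etc/resolv.conf names the 10.96.0.10 resolver seen in the cri-dockerd line earlier.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Resolve the service name from the CoreDNS query log above. Outside
		// the cluster this fails: the name exists only on the cluster DNS.
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		addrs, err := net.DefaultResolver.LookupIPAddr(ctx, "nginx-svc.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		for _, a := range addrs {
			fmt.Println("nginx-svc resolves to", a.IP)
		}
	}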
	
	
	==> coredns [6ccf69b3b38e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44105 - 45184 "HINFO IN 211027731233170688.2649721824542737394. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009971141s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
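	Note: this earlier CoreDNS instance shows the graceful-shutdown path: on SIGTERM it keeps serving in lameduck mode for 5s so in-flight clients can drain before it exits. A generic Go sketch of that pattern (not CoreDNS's actual code):

	package main

	import (
		"context"
		"fmt"
		"os/signal"
		"syscall"
		"time"
	)

	func main() {
		// Wait for SIGTERM, then keep "serving" through a 5s lameduck window
		// (the duration the health plugin reports above) before exiting.
		ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
		defer stop()

		fmt.Println("serving; send SIGTERM to begin shutdown")
		<-ctx.Done()
		fmt.Println("SIGTERM: entering lameduck mode for 5s")
		time.Sleep(5 * time.Second) // drain in-flight requests here
		fmt.Println("terminating")
	}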
	
	
	==> describe nodes <==
	Name:               functional-755000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-755000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=functional-755000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T12_03_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:03:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-755000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:06:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:06:27 +0000   Tue, 01 Oct 2024 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:06:27 +0000   Tue, 01 Oct 2024 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:06:27 +0000   Tue, 01 Oct 2024 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:06:27 +0000   Tue, 01 Oct 2024 19:03:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-755000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 8268d400250447a89e6499fa77b13a0d
	  System UUID:                8268d400250447a89e6499fa77b13a0d
	  Boot ID:                    b6dbd3de-2eff-4c7b-99bd-ca3315633fff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     hello-node-64b4f8f9ff-58hvf                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  default                     hello-node-connect-65d86f57f4-66n2d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 coredns-7c65d6cfc9-d2x76                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m2s
	  kube-system                 etcd-functional-755000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m8s
	  kube-system                 kube-apiserver-functional-755000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-functional-755000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-proxy-79fmc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kube-system                 kube-scheduler-functional-755000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  Starting                 67s                  kube-proxy       
	  Normal  Starting                 2m2s                 kube-proxy       
	  Normal  Starting                 3m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m7s                 kubelet          Node functional-755000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s                 kubelet          Node functional-755000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s                 kubelet          Node functional-755000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                 kubelet          Node functional-755000 status is now: NodeReady
	  Normal  RegisteredNode           3m3s                 node-controller  Node functional-755000 event: Registered Node functional-755000 in Controller
	  Normal  NodeHasSufficientPID     2m7s (x7 over 2m7s)  kubelet          Node functional-755000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node functional-755000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node functional-755000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m1s                 node-controller  Node functional-755000 event: Registered Node functional-755000 in Controller
	  Normal  Starting                 72s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  72s (x8 over 72s)    kubelet          Node functional-755000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x8 over 72s)    kubelet          Node functional-755000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x7 over 72s)    kubelet          Node functional-755000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  72s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                  node-controller  Node functional-755000 event: Registered Node functional-755000 in Controller
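	Note: the Conditions table above is what the start log's "verifying NodePressure condition" step reads: a healthy node reports Ready=True with the memory/disk/PID pressure conditions False. A hedged client-go sketch of that check, assuming the default kubeconfig path rather than minikube's own wiring:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube writes its own path instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeReady, corev1.NodeMemoryPressure,
					corev1.NodeDiskPressure, corev1.NodePIDPressure:
					// Healthy: Ready=True, the pressure conditions all False.
					fmt.Printf("%s: %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
				}
			}
		}
	}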
	
	
	==> dmesg <==
	[  +4.422762] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.307758] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.398726] systemd-fstab-generator[4869]: Ignoring "noauto" option for root device
	[Oct 1 19:05] systemd-fstab-generator[5325]: Ignoring "noauto" option for root device
	[  +0.053834] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.105203] systemd-fstab-generator[5358]: Ignoring "noauto" option for root device
	[  +0.093778] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.097323] systemd-fstab-generator[5384]: Ignoring "noauto" option for root device
	[  +5.117271] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.409122] systemd-fstab-generator[6007]: Ignoring "noauto" option for root device
	[  +0.083790] systemd-fstab-generator[6019]: Ignoring "noauto" option for root device
	[  +0.092191] systemd-fstab-generator[6031]: Ignoring "noauto" option for root device
	[  +0.095419] systemd-fstab-generator[6046]: Ignoring "noauto" option for root device
	[  +0.224797] systemd-fstab-generator[6214]: Ignoring "noauto" option for root device
	[  +0.956851] systemd-fstab-generator[6336]: Ignoring "noauto" option for root device
	[  +4.407891] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.838138] kauditd_printk_skb: 33 callbacks suppressed
	[  +8.354502] systemd-fstab-generator[7366]: Ignoring "noauto" option for root device
	[  +6.885963] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.301024] kauditd_printk_skb: 19 callbacks suppressed
	[Oct 1 19:06] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.380927] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.917635] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.128343] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.028815] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [1fe8a963f584] <==
	{"level":"info","ts":"2024-10-01T19:05:24.422861Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-01T19:05:24.422919Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:05:24.422946Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:05:24.424167Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:05:24.425999Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T19:05:24.426059Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T19:05:24.426114Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T19:05:24.432433Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T19:05:24.432468Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T19:05:25.804766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-01T19:05:25.804921Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-01T19:05:25.804997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T19:05:25.805030Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-01T19:05:25.805087Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-01T19:05:25.805138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-01T19:05:25.805181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-01T19:05:25.809850Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-755000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:05:25.809944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:05:25.811051Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:05:25.811262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:05:25.811639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:05:25.814077Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:05:25.814412Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:05:25.816111Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-01T19:05:25.816829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e2a1f3a1a703] <==
	{"level":"info","ts":"2024-10-01T19:04:30.636666Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:04:30.636746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-01T19:04:30.636785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T19:04:30.636809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T19:04:30.636834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T19:04:30.636852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T19:04:30.641358Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-755000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:04:30.641352Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:04:30.641887Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:04:30.641942Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:04:30.641492Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:04:30.644121Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:04:30.644121Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:04:30.646027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:04:30.647750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-01T19:05:09.306097Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T19:05:09.306128Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-755000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-01T19:05:09.306179Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:05:09.306227Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:05:09.321002Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T19:05:09.321029Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T19:05:09.321048Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-01T19:05:09.322911Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T19:05:09.322942Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T19:05:09.322945Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-755000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 19:06:35 up 3 min,  0 users,  load average: 0.48, 0.35, 0.15
	Linux functional-755000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [34f068dfa584] <==
	I1001 19:05:26.414421       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:05:26.414060       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1001 19:05:26.414482       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1001 19:05:26.414138       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E1001 19:05:26.416271       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1001 19:05:26.417020       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1001 19:05:26.417948       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 19:05:26.417988       1 aggregator.go:171] initial CRD sync complete...
	I1001 19:05:26.418005       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 19:05:26.418012       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 19:05:26.418014       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:05:26.444899       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:05:27.342591       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 19:05:27.975933       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 19:05:27.979680       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 19:05:27.993445       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 19:05:28.001666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 19:05:28.003739       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 19:05:29.741646       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 19:05:29.791677       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 19:05:47.369602       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.143.39"}
	I1001 19:05:52.425844       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.163.215"}
	I1001 19:06:02.876826       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1001 19:06:02.923887       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.231.97"}
	I1001 19:06:18.202894       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.172.204"}
	
	
	==> kube-controller-manager [9d6fe4342e7e] <==
	I1001 19:05:30.047965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="106.886828ms"
	I1001 19:05:30.048426       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="386.539µs"
	I1001 19:05:30.305950       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:05:30.354848       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:05:30.355194       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 19:05:34.443622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.153175ms"
	I1001 19:05:34.443693       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="30.625µs"
	I1001 19:06:02.891531       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="12.90934ms"
	I1001 19:06:02.898899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="7.320415ms"
	I1001 19:06:02.904965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="6.035089ms"
	I1001 19:06:02.905133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="32.333µs"
	I1001 19:06:09.943898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="40.708µs"
	I1001 19:06:10.959181       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.416µs"
	I1001 19:06:11.989860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="39.5µs"
	I1001 19:06:18.170557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.003744ms"
	I1001 19:06:18.176287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.700341ms"
	I1001 19:06:18.176493       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="13.083µs"
	I1001 19:06:18.176552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="6.75µs"
	I1001 19:06:19.105222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="51.916µs"
	I1001 19:06:20.143047       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="40.708µs"
	I1001 19:06:26.276124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="62.041µs"
	I1001 19:06:27.772886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-755000"
	I1001 19:06:32.323065       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="47.75µs"
	I1001 19:06:33.407740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="27.375µs"
	I1001 19:06:34.436480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="104.541µs"
	
	
	==> kube-controller-manager [d6c9f9e1c40b] <==
	I1001 19:04:34.501055       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 19:04:34.502190       1 shared_informer.go:320] Caches are synced for attach detach
	I1001 19:04:34.502272       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1001 19:04:34.521789       1 shared_informer.go:320] Caches are synced for daemon sets
	I1001 19:04:34.521808       1 shared_informer.go:320] Caches are synced for cronjob
	I1001 19:04:34.523009       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1001 19:04:34.523051       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1001 19:04:34.523447       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1001 19:04:34.523530       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1001 19:04:34.525230       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1001 19:04:34.526400       1 shared_informer.go:320] Caches are synced for TTL
	I1001 19:04:34.573989       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1001 19:04:34.593421       1 shared_informer.go:320] Caches are synced for stateful set
	I1001 19:04:34.622937       1 shared_informer.go:320] Caches are synced for disruption
	I1001 19:04:34.622971       1 shared_informer.go:320] Caches are synced for deployment
	I1001 19:04:34.699503       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 19:04:34.724667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="237.273298ms"
	I1001 19:04:34.725099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="44.124µs"
	I1001 19:04:34.727346       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 19:04:35.145355       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:04:35.221615       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 19:04:35.221830       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 19:04:41.645345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="9.04452ms"
	I1001 19:04:41.645682       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.957µs"
	I1001 19:05:02.043175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-755000"
	
	
	==> kube-proxy [a5111a3e358f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:05:27.869186       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:05:27.873134       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1001 19:05:27.873161       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:05:27.880868       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:05:27.880884       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:05:27.880894       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:05:27.881495       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:05:27.881589       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:05:27.881594       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:05:27.882002       1 config.go:199] "Starting service config controller"
	I1001 19:05:27.882011       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:05:27.882020       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:05:27.882023       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:05:27.882191       1 config.go:328] "Starting node config controller"
	I1001 19:05:27.882193       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:05:27.983117       1 shared_informer.go:320] Caches are synced for node config
	I1001 19:05:27.983140       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:05:27.983151       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bd86ebb608c0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 19:04:32.567130       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 19:04:32.570504       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1001 19:04:32.570530       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:04:32.647327       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 19:04:32.647348       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 19:04:32.647366       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:04:32.650675       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:04:32.651087       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:04:32.651259       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:04:32.651913       1 config.go:199] "Starting service config controller"
	I1001 19:04:32.653105       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:04:32.652037       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:04:32.653181       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:04:32.652959       1 config.go:328] "Starting node config controller"
	I1001 19:04:32.653232       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:04:32.753674       1 shared_informer.go:320] Caches are synced for node config
	I1001 19:04:32.753675       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:04:32.753687       1 shared_informer.go:320] Caches are synced for endpoint slice config
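	Note: both kube-proxy generations log the same startup path: the nftables cleanup fails with "Operation not supported" on this guest kernel, IPv6 iptables support is absent, and the proxier settles on single-stack IPv4 iptables. A rough standalone probe in the same spirit (not kube-proxy's real detection code); it assumes the nft binary is on PATH and must run as root.

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// Try to create (then remove) a scratch nftables table; on this guest
		// kernel the add fails with "Operation not supported", the condition
		// that pushes kube-proxy onto the iptables backend.
		var stderr bytes.Buffer
		add := exec.Command("nft", "add", "table", "ip", "probe_test")
		add.Stderr = &stderr
		if err := add.Run(); err != nil {
			fmt.Printf("nftables unavailable (%v: %s); fall back to iptables\n",
				err, bytes.TrimSpace(stderr.Bytes()))
			return
		}
		_ = exec.Command("nft", "delete", "table", "ip", "probe_test").Run() // clean up the probe table
		fmt.Println("nftables available")
	}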
	
	
	==> kube-scheduler [8ea646409c17] <==
	I1001 19:05:24.800884       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:05:26.335236       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:05:26.335255       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:05:26.335259       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:05:26.335262       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:05:26.369602       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:05:26.369616       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:05:26.370526       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:05:26.370562       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:05:26.370616       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:05:26.370680       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 19:05:26.474056       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cc8d19e7cae4] <==
	I1001 19:04:28.993110       1 serving.go:386] Generated self-signed cert in-memory
	W1001 19:04:31.148206       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 19:04:31.148341       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:04:31.148371       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 19:04:31.148721       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 19:04:31.186732       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 19:04:31.186860       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:04:31.187906       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 19:04:31.187996       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 19:04:31.188022       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 19:04:31.188062       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W1001 19:04:31.192310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 19:04:31.192761       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:04:31.192312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:04:31.192784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:04:31.192436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 19:04:31.192797       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:04:31.192691       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E1001 19:04:31.192830       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError"
	I1001 19:04:32.388468       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 19:05:09.313011       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 01 19:06:13 functional-755000 kubelet[6343]: I1001 19:06:13.011413    6343 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.2821157539999999 podStartE2EDuration="2.011397825s" podCreationTimestamp="2024-10-01 19:06:11 +0000 UTC" firstStartedPulling="2024-10-01 19:06:11.411286453 +0000 UTC m=+48.161775550" lastFinishedPulling="2024-10-01 19:06:12.140568525 +0000 UTC m=+48.891057621" observedRunningTime="2024-10-01 19:06:13.01117291 +0000 UTC m=+49.761662006" watchObservedRunningTime="2024-10-01 19:06:13.011397825 +0000 UTC m=+49.761886922"
	Oct 01 19:06:18 functional-755000 kubelet[6343]: I1001 19:06:18.299506    6343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhgk8\" (UniqueName: \"kubernetes.io/projected/ed95bd6f-bf0b-4808-839a-f26beb65f6ce-kube-api-access-rhgk8\") pod \"hello-node-64b4f8f9ff-58hvf\" (UID: \"ed95bd6f-bf0b-4808-839a-f26beb65f6ce\") " pod="default/hello-node-64b4f8f9ff-58hvf"
	Oct 01 19:06:19 functional-755000 kubelet[6343]: I1001 19:06:19.093552    6343 scope.go:117] "RemoveContainer" containerID="d4f959b287a05498873870b1bdcfc821b5854a66c4cdc56d4c6c18083baed224"
	Oct 01 19:06:20 functional-755000 kubelet[6343]: I1001 19:06:20.131083    6343 scope.go:117] "RemoveContainer" containerID="d4f959b287a05498873870b1bdcfc821b5854a66c4cdc56d4c6c18083baed224"
	Oct 01 19:06:20 functional-755000 kubelet[6343]: I1001 19:06:20.131504    6343 scope.go:117] "RemoveContainer" containerID="e5f3886d087e089a8eb1ae031b35efb6e7fe5d4b2bd9626bd863905c1ac606a6"
	Oct 01 19:06:20 functional-755000 kubelet[6343]: E1001 19:06:20.131784    6343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 10s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-58hvf_default(ed95bd6f-bf0b-4808-839a-f26beb65f6ce)\"" pod="default/hello-node-64b4f8f9ff-58hvf" podUID="ed95bd6f-bf0b-4808-839a-f26beb65f6ce"
	Oct 01 19:06:23 functional-755000 kubelet[6343]: E1001 19:06:23.321203    6343 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 01 19:06:23 functional-755000 kubelet[6343]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 01 19:06:23 functional-755000 kubelet[6343]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 01 19:06:23 functional-755000 kubelet[6343]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 01 19:06:23 functional-755000 kubelet[6343]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 01 19:06:23 functional-755000 kubelet[6343]: I1001 19:06:23.393051    6343 scope.go:117] "RemoveContainer" containerID="0e1f86ad163d95aecbaee4336a3b1d26b84b01e4069b8183e84843c68e25578e"
	Oct 01 19:06:25 functional-755000 kubelet[6343]: I1001 19:06:25.292251    6343 scope.go:117] "RemoveContainer" containerID="74bccdacb7fe785c0bfa2b2900801660fe8525d0f75c2128e066dcfdeb5614db"
	Oct 01 19:06:26 functional-755000 kubelet[6343]: I1001 19:06:26.259891    6343 scope.go:117] "RemoveContainer" containerID="74bccdacb7fe785c0bfa2b2900801660fe8525d0f75c2128e066dcfdeb5614db"
	Oct 01 19:06:26 functional-755000 kubelet[6343]: I1001 19:06:26.260303    6343 scope.go:117] "RemoveContainer" containerID="90e634beaea94ec3a41fcd2351086df92c65fdc89d14ead1e252a9c44f8c7772"
	Oct 01 19:06:26 functional-755000 kubelet[6343]: E1001 19:06:26.260555    6343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-66n2d_default(1bb2f1ce-f87f-4e23-9f90-d6f2a02dcbb2)\"" pod="default/hello-node-connect-65d86f57f4-66n2d" podUID="1bb2f1ce-f87f-4e23-9f90-d6f2a02dcbb2"
	Oct 01 19:06:26 functional-755000 kubelet[6343]: I1001 19:06:26.780308    6343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3539f911-8fc5-4047-b105-bdd9567e63d8-test-volume\") pod \"busybox-mount\" (UID: \"3539f911-8fc5-4047-b105-bdd9567e63d8\") " pod="default/busybox-mount"
	Oct 01 19:06:26 functional-755000 kubelet[6343]: I1001 19:06:26.780365    6343 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mdbt\" (UniqueName: \"kubernetes.io/projected/3539f911-8fc5-4047-b105-bdd9567e63d8-kube-api-access-8mdbt\") pod \"busybox-mount\" (UID: \"3539f911-8fc5-4047-b105-bdd9567e63d8\") " pod="default/busybox-mount"
	Oct 01 19:06:27 functional-755000 kubelet[6343]: I1001 19:06:27.307500    6343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3bfa34b78ce3e6878d53809bd122943900f5870c1a0e3e0c3e2106268d5400"
	Oct 01 19:06:32 functional-755000 kubelet[6343]: I1001 19:06:32.293277    6343 scope.go:117] "RemoveContainer" containerID="e5f3886d087e089a8eb1ae031b35efb6e7fe5d4b2bd9626bd863905c1ac606a6"
	Oct 01 19:06:33 functional-755000 kubelet[6343]: I1001 19:06:33.402162    6343 scope.go:117] "RemoveContainer" containerID="e5f3886d087e089a8eb1ae031b35efb6e7fe5d4b2bd9626bd863905c1ac606a6"
	Oct 01 19:06:33 functional-755000 kubelet[6343]: I1001 19:06:33.402311    6343 scope.go:117] "RemoveContainer" containerID="57a816758345e41e6fe5641d49c240052306540edab5a5fe5ad08c5ae42f1e13"
	Oct 01 19:06:33 functional-755000 kubelet[6343]: E1001 19:06:33.402377    6343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-58hvf_default(ed95bd6f-bf0b-4808-839a-f26beb65f6ce)\"" pod="default/hello-node-64b4f8f9ff-58hvf" podUID="ed95bd6f-bf0b-4808-839a-f26beb65f6ce"
	Oct 01 19:06:34 functional-755000 kubelet[6343]: I1001 19:06:34.424866    6343 scope.go:117] "RemoveContainer" containerID="57a816758345e41e6fe5641d49c240052306540edab5a5fe5ad08c5ae42f1e13"
	Oct 01 19:06:34 functional-755000 kubelet[6343]: E1001 19:06:34.425095    6343 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-58hvf_default(ed95bd6f-bf0b-4808-839a-f26beb65f6ce)\"" pod="default/hello-node-64b4f8f9ff-58hvf" podUID="ed95bd6f-bf0b-4808-839a-f26beb65f6ce"
	
	
	==> storage-provisioner [7ce88b5ac99a] <==
	I1001 19:05:27.816806       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:05:27.824734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:05:27.825969       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:05:45.238986       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:05:45.239496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-755000_993b4e90-2c1b-4aca-a0c1-02bd46a1d48a!
	I1001 19:05:45.240511       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ede0b6e-0e77-4e88-ae17-c7938261c04c", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-755000_993b4e90-2c1b-4aca-a0c1-02bd46a1d48a became leader
	I1001 19:05:45.340566       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-755000_993b4e90-2c1b-4aca-a0c1-02bd46a1d48a!
	I1001 19:05:57.307944       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1001 19:05:57.308242       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b2980b24-6e6a-4a30-b89f-983834e76153", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1001 19:05:57.308028       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6dc2ac3c-55b5-4af2-ae57-3d750bc7c479 335 0 2024-10-01 19:03:33 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-01 19:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b2980b24-6e6a-4a30-b89f-983834e76153 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b2980b24-6e6a-4a30-b89f-983834e76153 689 0 2024-10-01 19:05:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-01 19:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-01 19:05:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1001 19:05:57.308549       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b2980b24-6e6a-4a30-b89f-983834e76153" provisioned
	I1001 19:05:57.308592       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1001 19:05:57.308605       1 volume_store.go:212] Trying to save persistentvolume "pvc-b2980b24-6e6a-4a30-b89f-983834e76153"
	I1001 19:05:57.313994       1 volume_store.go:219] persistentvolume "pvc-b2980b24-6e6a-4a30-b89f-983834e76153" saved
	I1001 19:05:57.314101       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b2980b24-6e6a-4a30-b89f-983834e76153", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b2980b24-6e6a-4a30-b89f-983834e76153
	
	
	==> storage-provisioner [f3d47f75396a] <==
	I1001 19:04:32.542677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:04:32.552658       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:04:32.552952       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:04:49.963628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:04:49.963788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-755000_087eaab3-ed15-4fc6-be6e-efd05ba9fc60!
	I1001 19:04:49.964803       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ede0b6e-0e77-4e88-ae17-c7938261c04c", APIVersion:"v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-755000_087eaab3-ed15-4fc6-be6e-efd05ba9fc60 became leader
	I1001 19:04:50.064162       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-755000_087eaab3-ed15-4fc6-be6e-efd05ba9fc60!
	

-- /stdout --
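
Note on the kube-proxy logs above: the nftables cleanup commands ("add table ip kube-proxy", "add table ip6 kube-proxy") fail with "Operation not supported", after which kube-proxy falls back to the iptables proxier ("Using iptables Proxier"). The same kernel-capability probe can be run by hand; below is a minimal Go sketch, under the assumptions that the guest has an nft binary and that "nft -f -" reads a ruleset from stdin. The table name is a throwaway chosen for the probe, not anything minikube creates.

	// nftprobe.go: checks whether the running kernel accepts an nftables
	// table, mirroring the "add table ip kube-proxy" command in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Create and immediately delete a throwaway table. An "Operation
		// not supported" error here corresponds to the kube-proxy cleanup
		// failure shown above.
		script := "add table ip probe-table\ndelete table ip probe-table\n"
		cmd := exec.Command("nft", "-f", "-")
		cmd.Stdin = strings.NewReader(script)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("nftables unsupported: %v\n%s", err, out)
			return
		}
		fmt.Println("nftables supported")
	}
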
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-755000 -n functional-755000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-755000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-755000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-755000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-755000/192.168.105.4
	Start Time:       Tue, 01 Oct 2024 12:06:26 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://a207b7eea63af40379f915ab2f18d30fe7966dea89211a7a41cf018c86374b45
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 01 Oct 2024 12:06:33 -0700
	      Finished:     Tue, 01 Oct 2024 12:06:33 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8mdbt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8mdbt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  9s    default-scheduler  Successfully assigned default/busybox-mount to functional-755000
	  Normal  Pulling    8s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 6.247s (6.247s including waiting). Image size: 3547125 bytes.
	  Normal  Created    2s    kubelet            Created container mount-munger
	  Normal  Started    2s    kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (32.80s)
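
The kubelet log above shows why this test failed: the echoserver-arm containers behind hello-node and hello-node-connect keep crashing into CrashLoopBackOff, so the service connectivity check never succeeds. The post-mortem then finds the leftover busybox-mount pod with "kubectl get po --field-selector=status.phase!=Running"; the same query can be issued programmatically. A minimal client-go sketch, not part of the test suite, with the kubeconfig path as a placeholder assumption:

	// nonrunning.go: lists pods that are not in the Running phase, the
	// same field-selector query the post-mortem helper runs via kubectl.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; the report's cluster context is functional-755000.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			fmt.Printf("%s/%s %s\n", pod.Namespace, pod.Name, pod.Status.Phase)
		}
	}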

TestMultiControlPlane/serial/StopSecondaryNode (64.18s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 node stop m02 -v=7 --alsologtostderr
E1001 12:12:14.078239    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-darwin-arm64 -p ha-268000 node stop m02 -v=7 --alsologtostderr: (12.18912775s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr: (26.008642875s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 3 (25.976819709s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1001 12:13:11.975401    3265 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 12:13:11.975409    3265 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (64.18s)
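
Both status probes in this test fail the same way: a TCP dial to 192.168.105.5:22 times out, so the harness cannot even reach the primary node's SSH port after stopping m02. A stdlib-only Go sketch of that reachability check with a small retry loop; the address is taken from the stderr above, and the attempt count and timeouts are arbitrary:

	// sshreach.go: retries a TCP dial to the node's SSH port, the same
	// connection the status probe reports as timing out.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const addr = "192.168.105.5:22" // node address from the stderr above
		for attempt := 1; attempt <= 5; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("ssh port reachable")
				return
			}
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("ssh port unreachable; the host VM is likely not running")
	}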

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E1001 12:13:36.000067    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (25.978276125s)
ha_test.go:415: expected profile "ha-268000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-268000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"
KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\"
:false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",
\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 3 (25.955125084s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1001 12:14:03.907956    3278 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 12:14:03.907969    3278 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (51.93s)
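
The assertion here decodes "profile list --output json" and compares the profile's Status field: it expects "Degraded" (one control plane stopped) but gets "Unknown", because the status probe cannot reach the primary node at all. A minimal Go sketch of that decode step, matching the JSON shape quoted in the failure message and reading only the fields the assertion inspects; the bare "minikube" binary name is an assumption, since the report invokes out/minikube-darwin-arm64:

	// profilestatus.go: decodes the "profile list --output json" payload
	// and prints each valid profile's Status field.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// profileList mirrors just the fields inspected above; the real
	// payload carries the full cluster config as well.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // e.g. "ha-268000: Unknown"
		}
	}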

TestMultiControlPlane/serial/RestartSecondaryNode (87.09s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-268000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.083316708s)

-- stdout --
	* Starting "ha-268000-m02" control-plane node in "ha-268000" cluster
	* Restarting existing qemu2 VM for "ha-268000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-268000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:14:03.941761    3287 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:14:03.942030    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:14:03.942033    3287 out.go:358] Setting ErrFile to fd 2...
	I1001 12:14:03.942036    3287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:14:03.942175    3287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:14:03.942427    3287 mustload.go:65] Loading cluster: ha-268000
	I1001 12:14:03.942654    3287 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 12:14:03.942909    3287 host.go:58] "ha-268000-m02" host status: Stopped
	I1001 12:14:03.947503    3287 out.go:177] * Starting "ha-268000-m02" control-plane node in "ha-268000" cluster
	I1001 12:14:03.951442    3287 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:14:03.951457    3287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:14:03.951462    3287 cache.go:56] Caching tarball of preloaded images
	I1001 12:14:03.951543    3287 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:14:03.951549    3287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:14:03.951606    3287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/ha-268000/config.json ...
	I1001 12:14:03.952052    3287 start.go:360] acquireMachinesLock for ha-268000-m02: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:14:03.952119    3287 start.go:364] duration metric: took 31.917µs to acquireMachinesLock for "ha-268000-m02"
	I1001 12:14:03.952128    3287 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:14:03.952131    3287 fix.go:54] fixHost starting: m02
	I1001 12:14:03.952234    3287 fix.go:112] recreateIfNeeded on ha-268000-m02: state=Stopped err=<nil>
	W1001 12:14:03.952240    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:14:03.955440    3287 out.go:177] * Restarting existing qemu2 VM for "ha-268000-m02" ...
	I1001 12:14:03.959408    3287 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:14:03.959449    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3b:ee:b7:eb:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/disk.qcow2
	I1001 12:14:03.962274    3287 main.go:141] libmachine: STDOUT: 
	I1001 12:14:03.962289    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:14:03.962315    3287 fix.go:56] duration metric: took 10.182542ms for fixHost
	I1001 12:14:03.962323    3287 start.go:83] releasing machines lock for "ha-268000-m02", held for 10.194625ms
	W1001 12:14:03.962330    3287 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:14:03.962357    3287 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:14:03.962361    3287 start.go:729] Will try again in 5 seconds ...
	I1001 12:14:08.963432    3287 start.go:360] acquireMachinesLock for ha-268000-m02: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:14:08.963567    3287 start.go:364] duration metric: took 103.75µs to acquireMachinesLock for "ha-268000-m02"
	I1001 12:14:08.963603    3287 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:14:08.963607    3287 fix.go:54] fixHost starting: m02
	I1001 12:14:08.963761    3287 fix.go:112] recreateIfNeeded on ha-268000-m02: state=Stopped err=<nil>
	W1001 12:14:08.963767    3287 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:14:08.966714    3287 out.go:177] * Restarting existing qemu2 VM for "ha-268000-m02" ...
	I1001 12:14:08.970702    3287 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:14:08.970752    3287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:3b:ee:b7:eb:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000-m02/disk.qcow2
	I1001 12:14:08.972969    3287 main.go:141] libmachine: STDOUT: 
	I1001 12:14:08.973028    3287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:14:08.973081    3287 fix.go:56] duration metric: took 9.473ms for fixHost
	I1001 12:14:08.973090    3287 start.go:83] releasing machines lock for "ha-268000-m02", held for 9.515458ms
	W1001 12:14:08.973222    3287 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:14:08.977744    3287 out.go:201] 
	W1001 12:14:08.981724    3287 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:14:08.981729    3287 out.go:270] * 
	* 
	W1001 12:14:08.983455    3287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:14:08.987760    3287 out.go:201] 

** /stderr **
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-268000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr: (25.959374625s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.038954041s)

** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 3 (26.009183208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1001 12:15:30.997968    3309 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 12:15:30.997983    3309 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (87.09s)
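
Note: both failures in this test point at the guest network rather than at the node-start logic itself: kubectl times out dialing the HA apiserver VIP (192.168.105.254:8443), and the status probe times out dialing the primary node's SSH port (192.168.105.5:22). A quick connectivity check from the agent, assuming the BSD nc that ships with macOS, would be:

	nc -vz -w 5 192.168.105.254 8443   # apiserver VIP from the failed kubectl call
	nc -vz -w 5 192.168.105.5 22       # SSH port of the primary node

Running kubectl get nodes --request-timeout=10s -v=6 likewise fails fast and logs the exact endpoint being dialed.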

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (3.051948041s)
ha_test.go:309: expected profile "ha-268000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-268000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":fa
lse,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"M
ountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 3 (2.561603125s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1001 12:15:36.611844    3319 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	E1001 12:15:36.611861    3319 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: host is down

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (5.61s)
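
Note: the assertion dumps the entire escaped profile config, which buries the one field it actually checks. The status can be pulled out of the same JSON directly, assuming jq is available on the agent:

	out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[] | [.Name, .Status] | @tsv'

For this run that would print ha-268000 with status Unknown instead of the expected HAppy.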

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-268000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-268000 -v=7 --alsologtostderr
E1001 12:15:42.771919    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:15:52.121688    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:16:19.839974    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-268000 -v=7 --alsologtostderr: (3m49.012280084s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-268000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-268000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.237004875s)

-- stdout --
	* [ha-268000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-268000" primary control-plane node in "ha-268000" cluster
	* Restarting existing qemu2 VM for "ha-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:19:25.725290    3372 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:19:25.725494    3372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:25.725499    3372 out.go:358] Setting ErrFile to fd 2...
	I1001 12:19:25.725502    3372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:25.725673    3372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:19:25.727033    3372 out.go:352] Setting JSON to false
	I1001 12:19:25.748377    3372 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2930,"bootTime":1727807435,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:19:25.748438    3372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:19:25.753790    3372 out.go:177] * [ha-268000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:19:25.760768    3372 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:19:25.760801    3372 notify.go:220] Checking for updates...
	I1001 12:19:25.767700    3372 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:19:25.771726    3372 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:19:25.775665    3372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:19:25.778719    3372 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:19:25.781737    3372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:19:25.785064    3372 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:19:25.785118    3372 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:19:25.789676    3372 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:19:25.796704    3372 start.go:297] selected driver: qemu2
	I1001 12:19:25.796711    3372 start.go:901] validating driver "qemu2" against &{Name:ha-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-268000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass
:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:19:25.796830    3372 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:19:25.799625    3372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:19:25.799653    3372 cni.go:84] Creating CNI manager for ""
	I1001 12:19:25.799680    3372 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 12:19:25.799732    3372 start.go:340] cluster config:
	{Name:ha-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-268000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:19:25.804284    3372 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:19:25.812621    3372 out.go:177] * Starting "ha-268000" primary control-plane node in "ha-268000" cluster
	I1001 12:19:25.816720    3372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:19:25.816738    3372 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:19:25.816748    3372 cache.go:56] Caching tarball of preloaded images
	I1001 12:19:25.816814    3372 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:19:25.816820    3372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:19:25.816900    3372 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/ha-268000/config.json ...
	I1001 12:19:25.817399    3372 start.go:360] acquireMachinesLock for ha-268000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:19:25.817437    3372 start.go:364] duration metric: took 32µs to acquireMachinesLock for "ha-268000"
	I1001 12:19:25.817446    3372 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:19:25.817451    3372 fix.go:54] fixHost starting: 
	I1001 12:19:25.817577    3372 fix.go:112] recreateIfNeeded on ha-268000: state=Stopped err=<nil>
	W1001 12:19:25.817588    3372 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:19:25.820679    3372 out.go:177] * Restarting existing qemu2 VM for "ha-268000" ...
	I1001 12:19:25.828702    3372 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:19:25.828753    3372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:26:22:3a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/disk.qcow2
	I1001 12:19:25.831051    3372 main.go:141] libmachine: STDOUT: 
	I1001 12:19:25.831080    3372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:19:25.831115    3372 fix.go:56] duration metric: took 13.662583ms for fixHost
	I1001 12:19:25.831121    3372 start.go:83] releasing machines lock for "ha-268000", held for 13.679208ms
	W1001 12:19:25.831129    3372 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:19:25.831160    3372 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:19:25.831165    3372 start.go:729] Will try again in 5 seconds ...
	I1001 12:19:30.833282    3372 start.go:360] acquireMachinesLock for ha-268000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:19:30.833760    3372 start.go:364] duration metric: took 367.792µs to acquireMachinesLock for "ha-268000"
	I1001 12:19:30.833931    3372 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:19:30.833951    3372 fix.go:54] fixHost starting: 
	I1001 12:19:30.834736    3372 fix.go:112] recreateIfNeeded on ha-268000: state=Stopped err=<nil>
	W1001 12:19:30.834763    3372 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:19:30.842218    3372 out.go:177] * Restarting existing qemu2 VM for "ha-268000" ...
	I1001 12:19:30.847158    3372 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:19:30.847357    3372 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:26:22:3a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/disk.qcow2
	I1001 12:19:30.857634    3372 main.go:141] libmachine: STDOUT: 
	I1001 12:19:30.857712    3372 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:19:30.857812    3372 fix.go:56] duration metric: took 23.861625ms for fixHost
	I1001 12:19:30.857839    3372 start.go:83] releasing machines lock for "ha-268000", held for 24.054292ms
	W1001 12:19:30.858068    3372 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:19:30.865131    3372 out.go:201] 
	W1001 12:19:30.869218    3372 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:19:30.869257    3372 out.go:270] * 
	* 
	W1001 12:19:30.870755    3372 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:19:30.879198    3372 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-268000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-268000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (34.38075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.39s)
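
Note: this restart never reaches provisioning; both VM start attempts fail with Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was listening on the socket when qemu2 tried to attach its netdev. On an agent where socket_vmnet was installed via Homebrew (an assumption; the log only shows the client at /opt/socket_vmnet/bin/socket_vmnet_client), the daemon can be checked and restarted with:

	ls -l /var/run/socket_vmnet                # the socket should exist
	sudo brew services restart socket_vmnet    # restart the daemon behind it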

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-268000 node delete m03 -v=7 --alsologtostderr: exit status 83 (40.815167ms)

-- stdout --
	* The control-plane node ha-268000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-268000"

-- /stdout --
** stderr ** 
	I1001 12:19:31.022030    3387 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:19:31.022273    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:31.022277    3387 out.go:358] Setting ErrFile to fd 2...
	I1001 12:19:31.022279    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:31.022405    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:19:31.022664    3387 mustload.go:65] Loading cluster: ha-268000
	I1001 12:19:31.022917    3387 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 12:19:31.023257    3387 out.go:270] ! The control-plane node ha-268000 host is not running (will try others): state=Stopped
	! The control-plane node ha-268000 host is not running (will try others): state=Stopped
	W1001 12:19:31.023393    3387 out.go:270] ! The control-plane node ha-268000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-268000-m02 host is not running (will try others): state=Stopped
	I1001 12:19:31.028017    3387 out.go:177] * The control-plane node ha-268000-m03 host is not running: state=Stopped
	I1001 12:19:31.031053    3387 out.go:177]   To start a cluster, run: "minikube start -p ha-268000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-268000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr: exit status 7 (30.223916ms)

-- stdout --
	ha-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:19:31.063103    3389 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:19:31.063275    3389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:31.063279    3389 out.go:358] Setting ErrFile to fd 2...
	I1001 12:19:31.063281    3389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:19:31.063403    3389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:19:31.063532    3389 out.go:352] Setting JSON to false
	I1001 12:19:31.063543    3389 mustload.go:65] Loading cluster: ha-268000
	I1001 12:19:31.063583    3389 notify.go:220] Checking for updates...
	I1001 12:19:31.063801    3389 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:19:31.063812    3389 status.go:174] checking status of ha-268000 ...
	I1001 12:19:31.064037    3389 status.go:371] ha-268000 host status = "Stopped" (err=<nil>)
	I1001 12:19:31.064040    3389 status.go:384] host is not running, skipping remaining checks
	I1001 12:19:31.064042    3389 status.go:176] ha-268000 status: &{Name:ha-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:19:31.064052    3389 status.go:174] checking status of ha-268000-m02 ...
	I1001 12:19:31.064139    3389 status.go:371] ha-268000-m02 host status = "Stopped" (err=<nil>)
	I1001 12:19:31.064142    3389 status.go:384] host is not running, skipping remaining checks
	I1001 12:19:31.064143    3389 status.go:176] ha-268000-m02 status: &{Name:ha-268000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:19:31.064147    3389 status.go:174] checking status of ha-268000-m03 ...
	I1001 12:19:31.064232    3389 status.go:371] ha-268000-m03 host status = "Stopped" (err=<nil>)
	I1001 12:19:31.064235    3389 status.go:384] host is not running, skipping remaining checks
	I1001 12:19:31.064237    3389 status.go:176] ha-268000-m03 status: &{Name:ha-268000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:19:31.064240    3389 status.go:174] checking status of ha-268000-m04 ...
	I1001 12:19:31.064336    3389 status.go:371] ha-268000-m04 host status = "Stopped" (err=<nil>)
	I1001 12:19:31.064338    3389 status.go:384] host is not running, skipping remaining checks
	I1001 12:19:31.064340    3389 status.go:176] ha-268000-m04 status: &{Name:ha-268000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (29.723583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
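
Note: the delete fails up front because every host in the profile is still Stopped from the previous test; exit status 83 accompanies the "host is not running" advice above rather than a provisioning error. For scripting these per-node checks, minikube status also supports machine-readable output:

	out/minikube-darwin-arm64 -p ha-268000 status -o json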

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-268000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-268000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (29.275833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

TestMultiControlPlane/serial/StopCluster (202.08s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 stop -v=7 --alsologtostderr
E1001 12:20:42.766212    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:20:52.115441    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:22:05.856721    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-268000 stop -v=7 --alsologtostderr: (3m21.984882208s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr: exit status 7 (63.835083ms)

-- stdout --
	ha-268000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-268000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:22:53.213430    3439 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:22:53.213641    3439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:53.213645    3439 out.go:358] Setting ErrFile to fd 2...
	I1001 12:22:53.213649    3439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:53.213810    3439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:22:53.213978    3439 out.go:352] Setting JSON to false
	I1001 12:22:53.213992    3439 mustload.go:65] Loading cluster: ha-268000
	I1001 12:22:53.214029    3439 notify.go:220] Checking for updates...
	I1001 12:22:53.214319    3439 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:22:53.214334    3439 status.go:174] checking status of ha-268000 ...
	I1001 12:22:53.214657    3439 status.go:371] ha-268000 host status = "Stopped" (err=<nil>)
	I1001 12:22:53.214662    3439 status.go:384] host is not running, skipping remaining checks
	I1001 12:22:53.214665    3439 status.go:176] ha-268000 status: &{Name:ha-268000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:22:53.214678    3439 status.go:174] checking status of ha-268000-m02 ...
	I1001 12:22:53.214809    3439 status.go:371] ha-268000-m02 host status = "Stopped" (err=<nil>)
	I1001 12:22:53.214813    3439 status.go:384] host is not running, skipping remaining checks
	I1001 12:22:53.214816    3439 status.go:176] ha-268000-m02 status: &{Name:ha-268000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:22:53.214820    3439 status.go:174] checking status of ha-268000-m03 ...
	I1001 12:22:53.214953    3439 status.go:371] ha-268000-m03 host status = "Stopped" (err=<nil>)
	I1001 12:22:53.214957    3439 status.go:384] host is not running, skipping remaining checks
	I1001 12:22:53.214960    3439 status.go:176] ha-268000-m03 status: &{Name:ha-268000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 12:22:53.214967    3439 status.go:174] checking status of ha-268000-m04 ...
	I1001 12:22:53.215087    3439 status.go:371] ha-268000-m04 host status = "Stopped" (err=<nil>)
	I1001 12:22:53.215091    3439 status.go:384] host is not running, skipping remaining checks
	I1001 12:22:53.215093    3439 status.go:176] ha-268000-m04 status: &{Name:ha-268000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": ha-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": ha-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr": ha-268000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-268000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (32.434167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.08s)
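
Note: the stop itself worked; all four hosts report Stopped, and the follow-up status call exits 7, which the post-mortem helper itself flags as "may be ok". The remaining failures are the expected-count assertions above. A more compact per-node summary uses the same Go-template mechanism as the post-mortem check; as an assumption, the template should be applied once per node when no -n flag is given:

	out/minikube-darwin-arm64 -p ha-268000 status --format '{{.Name}}: {{.Host}}'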

TestMultiControlPlane/serial/RestartCluster (5.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-268000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-268000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183470125s)

-- stdout --
	* [ha-268000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-268000" primary control-plane node in "ha-268000" cluster
	* Restarting existing qemu2 VM for "ha-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-268000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:22:53.276608    3443 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:22:53.276736    3443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:53.276739    3443 out.go:358] Setting ErrFile to fd 2...
	I1001 12:22:53.276742    3443 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:53.276868    3443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:22:53.277834    3443 out.go:352] Setting JSON to false
	I1001 12:22:53.293885    3443 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3138,"bootTime":1727807435,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:22:53.293967    3443 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:22:53.298959    3443 out.go:177] * [ha-268000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:22:53.307111    3443 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:22:53.307172    3443 notify.go:220] Checking for updates...
	I1001 12:22:53.315050    3443 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:22:53.318045    3443 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:22:53.322020    3443 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:22:53.325043    3443 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:22:53.328048    3443 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:22:53.331347    3443 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:22:53.331609    3443 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:22:53.336022    3443 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:22:53.343087    3443 start.go:297] selected driver: qemu2
	I1001 12:22:53.343093    3443 start.go:901] validating driver "qemu2" against &{Name:ha-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-268000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:22:53.343187    3443 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:22:53.345452    3443 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:22:53.345474    3443 cni.go:84] Creating CNI manager for ""
	I1001 12:22:53.345494    3443 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 12:22:53.345537    3443 start.go:340] cluster config:
	{Name:ha-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-268000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:22:53.349154    3443 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:22:53.357050    3443 out.go:177] * Starting "ha-268000" primary control-plane node in "ha-268000" cluster
	I1001 12:22:53.361080    3443 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:22:53.361097    3443 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:22:53.361116    3443 cache.go:56] Caching tarball of preloaded images
	I1001 12:22:53.361173    3443 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:22:53.361179    3443 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:22:53.361278    3443 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/ha-268000/config.json ...
	I1001 12:22:53.361725    3443 start.go:360] acquireMachinesLock for ha-268000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:22:53.361759    3443 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "ha-268000"
	I1001 12:22:53.361767    3443 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:22:53.361772    3443 fix.go:54] fixHost starting: 
	I1001 12:22:53.361895    3443 fix.go:112] recreateIfNeeded on ha-268000: state=Stopped err=<nil>
	W1001 12:22:53.361903    3443 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:22:53.366055    3443 out.go:177] * Restarting existing qemu2 VM for "ha-268000" ...
	I1001 12:22:53.373941    3443 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:22:53.373985    3443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:26:22:3a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/disk.qcow2
	I1001 12:22:53.375871    3443 main.go:141] libmachine: STDOUT: 
	I1001 12:22:53.375887    3443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:22:53.375921    3443 fix.go:56] duration metric: took 14.1485ms for fixHost
	I1001 12:22:53.375925    3443 start.go:83] releasing machines lock for "ha-268000", held for 14.161667ms
	W1001 12:22:53.375930    3443 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:22:53.375969    3443 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:22:53.375973    3443 start.go:729] Will try again in 5 seconds ...
	I1001 12:22:58.378050    3443 start.go:360] acquireMachinesLock for ha-268000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:22:58.378373    3443 start.go:364] duration metric: took 257.333µs to acquireMachinesLock for "ha-268000"
	I1001 12:22:58.378493    3443 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:22:58.378511    3443 fix.go:54] fixHost starting: 
	I1001 12:22:58.379228    3443 fix.go:112] recreateIfNeeded on ha-268000: state=Stopped err=<nil>
	W1001 12:22:58.379258    3443 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:22:58.382803    3443 out.go:177] * Restarting existing qemu2 VM for "ha-268000" ...
	I1001 12:22:58.390607    3443 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:22:58.390804    3443 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:26:22:3a:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/ha-268000/disk.qcow2
	I1001 12:22:58.399901    3443 main.go:141] libmachine: STDOUT: 
	I1001 12:22:58.399961    3443 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:22:58.400056    3443 fix.go:56] duration metric: took 21.53275ms for fixHost
	I1001 12:22:58.400071    3443 start.go:83] releasing machines lock for "ha-268000", held for 21.680583ms
	W1001 12:22:58.400240    3443 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-268000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:22:58.408590    3443 out.go:201] 
	W1001 12:22:58.411744    3443 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:22:58.411771    3443 out.go:270] * 
	* 
	W1001 12:22:58.414600    3443 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:22:58.420649    3443 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-268000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (69.2755ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
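The root cause is visible in the stderr above: the qemu2 driver execs /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so both restart attempts die before the VM boots. A minimal standalone Go sketch of that failing precondition (not part of the minikube test suite; the socket path is copied from the SocketVMnetPath field in the cluster config above):

// probe_socket_vmnet.go: dial the socket_vmnet unix socket the same way
// the qemu2 driver's client helper must. On a host where the socket_vmnet
// daemon is not listening, DialTimeout returns "connection refused", the
// condition that surfaces as GUEST_PROVISION in the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // from SocketVMnetPath in the config dump
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

Run on the affected agent, this reproduces the same "connection refused" condition that every qemu2 start in this report hits, independent of minikube.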

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-268000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-268000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (28.890834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
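This assertion only inspects the Status field of the JSON printed by `profile list --output json`, whose shape is quoted verbatim in the failure message above. A minimal decoding sketch (hypothetical type names; the JSON shape is copied from that message):

// Decode the profile list JSON and print each profile's status. The test
// expected "Degraded" here but the profile reported "Starting", since the
// cluster never came back after the failed restart above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Pipe the output of `minikube profile list --output json` to stdin.
	var pl profileList
	if err := json.NewDecoder(os.Stdin).Decode(&pl); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s\t%s\n", p.Name, p.Status)
	}
}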

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-268000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-268000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.5935ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-268000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-268000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:22:58.610433    3458 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:22:58.610815    3458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:58.610819    3458 out.go:358] Setting ErrFile to fd 2...
	I1001 12:22:58.610822    3458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:22:58.610991    3458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:22:58.611219    3458 mustload.go:65] Loading cluster: ha-268000
	I1001 12:22:58.611466    3458 config.go:182] Loaded profile config "ha-268000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 12:22:58.611787    3458 out.go:270] ! The control-plane node ha-268000 host is not running (will try others): state=Stopped
	! The control-plane node ha-268000 host is not running (will try others): state=Stopped
	W1001 12:22:58.611895    3458 out.go:270] ! The control-plane node ha-268000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-268000-m02 host is not running (will try others): state=Stopped
	I1001 12:22:58.616283    3458 out.go:177] * The control-plane node ha-268000-m03 host is not running: state=Stopped
	I1001 12:22:58.620223    3458 out.go:177]   To start a cluster, run: "minikube start -p ha-268000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-268000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (29.984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:309: expected profile "ha-268000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-268000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-268000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-268000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logvie
wer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":
\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-268000 -n ha-268000: exit status 7 (29.429834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-268000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.9s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-233000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-233000 --driver=qemu2 : exit status 80 (9.835421583s)

                                                
                                                
-- stdout --
	* [image-233000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-233000" primary control-plane node in "image-233000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-233000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-233000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-233000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-233000 -n image-233000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-233000 -n image-233000: exit status 7 (67.799916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-233000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.90s)

                                                
                                    
TestJSONOutput/start/Command (9.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-756000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-756000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.919515541s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"48b58fcb-0bc6-476c-83a4-e5630427bab1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-756000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5363ae21-c514-4fd1-8269-77b375f757f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"562a1ee9-52f4-4b1b-8778-32f559f54c2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig"}}
	{"specversion":"1.0","id":"c08ba46c-d431-42f8-87cf-6d3d3ac986d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9f0f9c2b-353e-4401-b54e-88f58674bb2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"69fea274-de5f-4bde-87f5-5f2ceee24bc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube"}}
	{"specversion":"1.0","id":"1da34321-fd24-4ad1-95f1-ae483e683497","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8affa4ea-2462-4a25-958b-a83c6f1ef1eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"675d1dbc-d57b-41e7-a643-968044591313","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ddf8c6ac-6f00-46b2-8dd5-708fd02a2bae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-756000\" primary control-plane node in \"json-output-756000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"291ea934-2ed3-4c53-af38-d3f4e80be21d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"00d38a7b-7ef6-4b3b-b013-f942605f9071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-756000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c6c5ca6-4333-477a-b27b-e022366be774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"65656a95-4fe7-466c-a672-0742ec02621b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"9ce68253-716f-4898-82d6-4e577a267ca6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-756000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d0f3f840-a13e-4bfc-a477-b7a6d8bcbba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bb239ee0-1dc4-453c-a3a4-1e98f125a07e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-756000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.92s)
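The unmarshal failure above is mechanical: --output=json is supposed to emit one CloudEvent JSON object per line, but the raw "OUTPUT:" and "ERROR:" lines from the failed qemu launch are interleaved into the stream, and the first byte of "OUTPUT:" is not valid JSON. A self-contained sketch reproducing the exact error text (the sample lines are abridged from the stdout above):

// Line-by-line JSON decoding of a contaminated event stream. The second
// line is raw driver output, so json.Unmarshal fails with
// "invalid character 'O' looking for beginning of value", exactly as
// json_output_test.go reports above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var v map[string]any
		if err := json.Unmarshal([]byte(l), &v); err != nil {
			fmt.Println("converting to cloud events:", err)
		}
	}
}

The same contamination explains TestJSONOutput/unpause below, where the stream begins with a plain "*" status line instead of JSON.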

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-756000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-756000 --output=json --user=testUser: exit status 83 (76.131084ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c337d682-680e-448f-b242-a11851909516","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-756000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"5e9204b9-1688-4d44-9942-7bcd3a808aab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-756000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-756000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-756000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-756000 --output=json --user=testUser: exit status 83 (44.353417ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-756000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-756000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-756000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.14s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-683000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-683000 --driver=qemu2 : exit status 80 (9.85560575s)

                                                
                                                
-- stdout --
	* [first-683000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-683000" primary control-plane node in "first-683000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-683000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-683000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-683000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-01 12:23:31.555749 -0700 PDT m=+2242.392367251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-684000 -n second-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-684000 -n second-684000: exit status 85 (74.093875ms)

                                                
                                                
-- stdout --
	* Profile "second-684000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-684000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-684000" host is not running, skipping log retrieval (state="* Profile \"second-684000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-684000\"")
helpers_test.go:175: Cleaning up "second-684000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-684000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-01 12:23:31.732246 -0700 PDT m=+2242.568867668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-683000 -n first-683000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-683000 -n first-683000: exit status 7 (30.2635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-683000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-683000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-683000
--- FAIL: TestMinikubeProfile (10.14s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-964000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-964000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.550105542s)

                                                
                                                
-- stdout --
	* [mount-start-1-964000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-964000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-964000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-964000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-964000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-964000 -n mount-start-1-964000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-964000 -n mount-start-1-964000: exit status 7 (66.19375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-964000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.083491542s)

                                                
                                                
-- stdout --
	* [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:23:42.669321    3598 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:23:42.669462    3598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:23:42.669466    3598 out.go:358] Setting ErrFile to fd 2...
	I1001 12:23:42.669468    3598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:23:42.669599    3598 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:23:42.670685    3598 out.go:352] Setting JSON to false
	I1001 12:23:42.686493    3598 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3187,"bootTime":1727807435,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:23:42.686565    3598 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:23:42.692590    3598 out.go:177] * [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:23:42.701401    3598 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:23:42.701434    3598 notify.go:220] Checking for updates...
	I1001 12:23:42.709389    3598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:23:42.712418    3598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:23:42.715458    3598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:23:42.718370    3598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:23:42.721429    3598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:23:42.724519    3598 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:23:42.728334    3598 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:23:42.735390    3598 start.go:297] selected driver: qemu2
	I1001 12:23:42.735396    3598 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:23:42.735429    3598 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:23:42.737535    3598 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:23:42.740412    3598 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:23:42.743454    3598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:23:42.743474    3598 cni.go:84] Creating CNI manager for ""
	I1001 12:23:42.743516    3598 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 12:23:42.743520    3598 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 12:23:42.743546    3598 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_v
mnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:23:42.747085    3598 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:23:42.753385    3598 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I1001 12:23:42.757336    3598 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:23:42.757352    3598 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:23:42.757367    3598 cache.go:56] Caching tarball of preloaded images
	I1001 12:23:42.757436    3598 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:23:42.757442    3598 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:23:42.757679    3598 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/multinode-301000/config.json ...
	I1001 12:23:42.757691    3598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/multinode-301000/config.json: {Name:mk3417216c4d0bc6b32908645efd9a40db18a946 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:23:42.757928    3598 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:23:42.757973    3598 start.go:364] duration metric: took 38.583µs to acquireMachinesLock for "multinode-301000"
	I1001 12:23:42.757986    3598 start.go:93] Provisioning new machine with config: &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:23:42.758021    3598 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:23:42.765423    3598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:23:42.783637    3598 start.go:159] libmachine.API.Create for "multinode-301000" (driver="qemu2")
	I1001 12:23:42.783667    3598 client.go:168] LocalClient.Create starting
	I1001 12:23:42.783738    3598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:23:42.783774    3598 main.go:141] libmachine: Decoding PEM data...
	I1001 12:23:42.783785    3598 main.go:141] libmachine: Parsing certificate...
	I1001 12:23:42.783835    3598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:23:42.783859    3598 main.go:141] libmachine: Decoding PEM data...
	I1001 12:23:42.783869    3598 main.go:141] libmachine: Parsing certificate...
	I1001 12:23:42.784238    3598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:23:42.944500    3598 main.go:141] libmachine: Creating SSH key...
	I1001 12:23:43.186498    3598 main.go:141] libmachine: Creating Disk image...
	I1001 12:23:43.186508    3598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:23:43.186738    3598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:43.196509    3598 main.go:141] libmachine: STDOUT: 
	I1001 12:23:43.196530    3598 main.go:141] libmachine: STDERR: 
	I1001 12:23:43.196596    3598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2 +20000M
	I1001 12:23:43.204508    3598 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:23:43.204523    3598 main.go:141] libmachine: STDERR: 
	I1001 12:23:43.204542    3598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:43.204550    3598 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:23:43.204562    3598 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:23:43.204587    3598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:15:d5:19:a1:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:43.206150    3598 main.go:141] libmachine: STDOUT: 
	I1001 12:23:43.206165    3598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:23:43.206183    3598 client.go:171] duration metric: took 422.519083ms to LocalClient.Create
	I1001 12:23:45.208384    3598 start.go:128] duration metric: took 2.450326583s to createHost
	I1001 12:23:45.208455    3598 start.go:83] releasing machines lock for "multinode-301000", held for 2.450522417s
	W1001 12:23:45.208514    3598 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:23:45.228614    3598 out.go:177] * Deleting "multinode-301000" in qemu2 ...
	W1001 12:23:45.266880    3598 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:23:45.266897    3598 start.go:729] Will try again in 5 seconds ...
	I1001 12:23:50.269121    3598 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:23:50.269570    3598 start.go:364] duration metric: took 316.541µs to acquireMachinesLock for "multinode-301000"
	I1001 12:23:50.269695    3598 start.go:93] Provisioning new machine with config: &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:23:50.269970    3598 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:23:50.290746    3598 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:23:50.342398    3598 start.go:159] libmachine.API.Create for "multinode-301000" (driver="qemu2")
	I1001 12:23:50.342461    3598 client.go:168] LocalClient.Create starting
	I1001 12:23:50.342599    3598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:23:50.342664    3598 main.go:141] libmachine: Decoding PEM data...
	I1001 12:23:50.342685    3598 main.go:141] libmachine: Parsing certificate...
	I1001 12:23:50.342767    3598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:23:50.342818    3598 main.go:141] libmachine: Decoding PEM data...
	I1001 12:23:50.342833    3598 main.go:141] libmachine: Parsing certificate...
	I1001 12:23:50.343397    3598 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:23:50.512558    3598 main.go:141] libmachine: Creating SSH key...
	I1001 12:23:50.649116    3598 main.go:141] libmachine: Creating Disk image...
	I1001 12:23:50.649124    3598 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:23:50.649321    3598 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:50.658345    3598 main.go:141] libmachine: STDOUT: 
	I1001 12:23:50.658360    3598 main.go:141] libmachine: STDERR: 
	I1001 12:23:50.658422    3598 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2 +20000M
	I1001 12:23:50.666329    3598 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:23:50.666344    3598 main.go:141] libmachine: STDERR: 
	I1001 12:23:50.666355    3598 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:50.666360    3598 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:23:50.666368    3598 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:23:50.666405    3598 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:d5:20:ed:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:23:50.668015    3598 main.go:141] libmachine: STDOUT: 
	I1001 12:23:50.668029    3598 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:23:50.668041    3598 client.go:171] duration metric: took 325.582041ms to LocalClient.Create
	I1001 12:23:52.670134    3598 start.go:128] duration metric: took 2.400188417s to createHost
	I1001 12:23:52.670327    3598 start.go:83] releasing machines lock for "multinode-301000", held for 2.400780334s
	W1001 12:23:52.670581    3598 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:23:52.688479    3598 out.go:201] 
	W1001 12:23:52.699683    3598 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:23:52.699707    3598 out.go:270] * 
	* 
	W1001 12:23:52.702475    3598 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:23:52.711471    3598 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-301000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (66.93075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.15s)
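
Note on the failure above: every VM creation in this run dies at the same step. socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never boots a guest and minikube exits with GUEST_PROVISION (exit status 80). A minimal Go probe of that socket, offered as a hypothetical diagnostic (not part of the test suite; the socket path is the SocketVMnetPath from the cluster config logged above):

	// probe_socket_vmnet.go: check whether the socket_vmnet daemon is
	// accepting connections on its unix socket.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const path = "/var/run/socket_vmnet" // SocketVMnetPath from the config above
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// "connection refused" means the socket file exists but no daemon
			// is listening behind it, which is the state this run is in.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Until that daemon is reachable again, the remaining TestMultiNode subtests below fail as downstream collateral of this one error.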

TestMultiNode/serial/DeployApp2Nodes (80.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (132.450917ms)

** stderr ** 
	error: cluster "multinode-301000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- rollout status deployment/busybox: exit status 1 (57.698708ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.1195ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:23:53.042762    1595 retry.go:31] will retry after 1.155765022s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.126041ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:23:54.305003    1595 retry.go:31] will retry after 1.387082869s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.574ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:23:55.796973    1595 retry.go:31] will retry after 3.214022693s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.631709ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:23:59.118012    1595 retry.go:31] will retry after 2.452669675s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.713125ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:24:01.675768    1595 retry.go:31] will retry after 6.2367165s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.955375ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:24:08.016796    1595 retry.go:31] will retry after 10.338853084s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.819542ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:24:18.460280    1595 retry.go:31] will retry after 17.023424856s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.647583ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:24:35.592470    1595 retry.go:31] will retry after 17.631404638s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.53425ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 12:24:53.329689    1595 retry.go:31] will retry after 20.083724087s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.6715ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.842833ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.026167ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.576042ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.079ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (30.076042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.98s)
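
Note on the retry pattern above: the retry.go:31 lines show the harness re-running the Pod-IP lookup with jittered, roughly exponentially growing delays (about 1.2s, 1.4s, 3.2s, ... 20s) until its time budget is exhausted; since the cluster was never created, every attempt fails identically. A self-contained sketch of that backoff pattern, in spirit only (this is not minikube's actual retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls op until it succeeds or maxElapsed is spent,
	// sleeping an exponentially growing, jittered delay between attempts.
	func retryWithBackoff(op func() error, base, maxElapsed time.Duration) error {
		start := time.Now()
		delay := base
		for attempt := 1; ; attempt++ {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start)+delay > maxElapsed {
				return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
	}

	func main() {
		op := func() error { return errors.New(`no server found for cluster "multinode-301000"`) }
		fmt.Println(retryWithBackoff(op, time.Second, 80*time.Second))
	}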

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-301000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.000458ms)

** stderr ** 
	error: no server found for cluster "multinode-301000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (30.032958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr: exit status 83 (42.465541ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
** stderr ** 
	I1001 12:25:13.887615    3687 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:13.887794    3687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:13.887798    3687 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:13.887800    3687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:13.887941    3687 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:13.888201    3687 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:13.888409    3687 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:13.893302    3687 out.go:177] * The control-plane node multinode-301000 host is not running: state=Stopped
	I1001 12:25:13.897263    3687 out.go:177]   To start a cluster, run: "minikube start -p multinode-301000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-301000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.852042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-301000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.215917ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-301000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-301000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.66175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-301000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-301000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-301000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVM
NUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-301000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesV
ersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\
":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.486917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
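
Note on the failure above: the quoted profile JSON records only the single placeholder control-plane node from the cluster config, because no host was ever created and the extra nodes never joined, while the test expects 3 (the two requested at start plus the one the preceding AddNode step tried to add). A sketch of the decode-and-count step against that JSON, using a trimmed-down, hypothetical subset of minikube's config types with only the fields the count needs:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated form of the payload quoted in the failure message.
		raw := `{"invalid":[],"valid":[{"Name":"multinode-301000",
		  "Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Prints "multinode-301000: 1 node(s)", not the expected 3.
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}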

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr: exit status 7 (29.821458ms)

-- stdout --
	{"Name":"multinode-301000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1001 12:25:14.095005    3699 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:14.095166    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.095169    3699 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:14.095171    3699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.095314    3699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:14.095476    3699 out.go:352] Setting JSON to true
	I1001 12:25:14.095487    3699 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:14.095548    3699 notify.go:220] Checking for updates...
	I1001 12:25:14.095705    3699 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:14.095723    3699 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:14.095965    3699 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:14.095969    3699 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:14.095971    3699 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-301000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.40875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
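
Note on the failure above: with one stopped node, "minikube status --output json" printed a single JSON object, while the test decodes into []cluster.Status (presumably the multinode shape), hence "json: cannot unmarshal object into Go value of type []cluster.Status". A tolerant decoder that accepts either shape, with a hypothetical Status struct trimmed to the fields in the logged output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a trimmed stand-in for minikube's cluster.Status.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	// decodeStatuses tries the array form first, then falls back to
	// wrapping a single object in a one-element slice.
	func decodeStatuses(raw []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(raw, &many); err == nil {
			return many, nil
		}
		var one Status
		if err := json.Unmarshal(raw, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		raw := []byte(`{"Name":"multinode-301000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped"}`)
		sts, err := decodeStatuses(raw)
		fmt.Println(sts, err) // one-element slice, <nil>
	}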

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node stop m03: exit status 85 (47.26375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status: exit status 7 (29.105542ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (29.75275ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:14.231528    3707 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:14.231693    3707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.231696    3707 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:14.231699    3707 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.231819    3707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:14.231942    3707 out.go:352] Setting JSON to false
	I1001 12:25:14.231953    3707 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:14.232006    3707 notify.go:220] Checking for updates...
	I1001 12:25:14.232161    3707 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:14.232171    3707 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:14.232384    3707 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:14.232388    3707 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:14.232389    3707 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.373583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (46.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.139167ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1001 12:25:14.291089    3711 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:14.291341    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.291344    3711 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:14.291346    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.291475    3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:14.291706    3711 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:14.291906    3711 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:14.295268    3711 out.go:201] 
	W1001 12:25:14.298299    3711 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1001 12:25:14.298304    3711 out.go:270] * 
	* 
	W1001 12:25:14.300049    3711 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:25:14.303260    3711 out.go:201] 

** /stderr **
multinode_test.go:284: I1001 12:25:14.291089    3711 out.go:345] Setting OutFile to fd 1 ...
I1001 12:25:14.291341    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:25:14.291344    3711 out.go:358] Setting ErrFile to fd 2...
I1001 12:25:14.291346    3711 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:25:14.291475    3711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:25:14.291706    3711 mustload.go:65] Loading cluster: multinode-301000
I1001 12:25:14.291906    3711 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:25:14.295268    3711 out.go:201] 
W1001 12:25:14.298299    3711 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1001 12:25:14.298304    3711 out.go:270] * 
* 
W1001 12:25:14.300049    3711 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1001 12:25:14.303260    3711 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node start m03 -v=7 --alsologtostderr": exit status 85
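
`exit status 85` here is the GUEST_NODE_RETRIEVE path shown above: `node start m03` looks the requested node up in the saved profile config, and this profile only ever has its primary node (the `Nodes:[{Name: …}]` list printed later in this report contains a single entry), so the lookup fails before anything is started. A hedged sketch of that lookup; the types and function are illustrative stand-ins, not minikube's code:

    package main

    import "fmt"

    // Node and findNode are illustrative stand-ins for the profile-config
    // lookup that `minikube node start <name>` performs.
    type Node struct{ Name string }

    func findNode(nodes []Node, name string) (Node, error) {
        for _, n := range nodes {
            if n.Name == name {
                return n, nil
            }
        }
        return Node{}, fmt.Errorf("retrieving node: Could not find node %s", name)
    }

    func main() {
        nodes := []Node{{Name: ""}} // only the unnamed primary node exists in this profile
        if _, err := findNode(nodes, "m03"); err != nil {
            fmt.Println("X Exiting due to GUEST_NODE_RETRIEVE:", err)
        }
    }
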
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (29.983ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:14.336433    3713 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:14.336601    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.336604    3713 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:14.336606    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:14.336729    3713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:14.336852    3713 out.go:352] Setting JSON to false
	I1001 12:25:14.336868    3713 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:14.336914    3713 notify.go:220] Checking for updates...
	I1001 12:25:14.337065    3713 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:14.337074    3713 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:14.337311    3713 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:14.337315    3713 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:14.337317    3713 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:14.338153    1595 retry.go:31] will retry after 682.944067ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (71.291708ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:15.092588    3715 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:15.092782    3715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:15.092786    3715 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:15.092789    3715 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:15.092976    3715 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:15.093117    3715 out.go:352] Setting JSON to false
	I1001 12:25:15.093131    3715 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:15.093164    3715 notify.go:220] Checking for updates...
	I1001 12:25:15.093398    3715 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:15.093412    3715 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:15.093728    3715 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:15.093733    3715 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:15.093736    3715 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:15.094707    1595 retry.go:31] will retry after 1.789305964s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (72.398291ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:16.956579    3717 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:16.956768    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:16.956773    3717 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:16.956776    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:16.956978    3717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:16.957129    3717 out.go:352] Setting JSON to false
	I1001 12:25:16.957142    3717 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:16.957183    3717 notify.go:220] Checking for updates...
	I1001 12:25:16.957422    3717 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:16.957434    3717 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:16.957738    3717 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:16.957743    3717 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:16.957745    3717 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:16.958761    1595 retry.go:31] will retry after 1.386394332s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (72.196959ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:18.417441    3719 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:18.417643    3719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:18.417648    3719 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:18.417651    3719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:18.417819    3719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:18.417979    3719 out.go:352] Setting JSON to false
	I1001 12:25:18.417994    3719 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:18.418034    3719 notify.go:220] Checking for updates...
	I1001 12:25:18.418278    3719 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:18.418290    3719 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:18.418624    3719 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:18.418629    3719 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:18.418631    3719 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:18.419691    1595 retry.go:31] will retry after 3.809332594s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.607208ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:22.302462    3721 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:22.302678    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:22.302683    3721 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:22.302686    3721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:22.302884    3721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:22.303095    3721 out.go:352] Setting JSON to false
	I1001 12:25:22.303110    3721 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:22.303156    3721 notify.go:220] Checking for updates...
	I1001 12:25:22.303423    3721 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:22.303436    3721 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:22.303764    3721 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:22.303769    3721 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:22.303772    3721 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:22.304926    1595 retry.go:31] will retry after 6.795745498s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (72.518625ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:29.173361    3723 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:29.173532    3723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:29.173538    3723 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:29.173542    3723 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:29.173714    3723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:29.173874    3723 out.go:352] Setting JSON to false
	I1001 12:25:29.173888    3723 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:29.173935    3723 notify.go:220] Checking for updates...
	I1001 12:25:29.174167    3723 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:29.174182    3723 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:29.174491    3723 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:29.174496    3723 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:29.174498    3723 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:29.175493    1595 retry.go:31] will retry after 9.149067766s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (72.702792ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:38.397222    3725 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:38.397480    3725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:38.397484    3725 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:38.397487    3725 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:38.397651    3725 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:38.397801    3725 out.go:352] Setting JSON to false
	I1001 12:25:38.397815    3725 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:38.397862    3725 notify.go:220] Checking for updates...
	I1001 12:25:38.398127    3725 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:38.398140    3725 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:38.398444    3725 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:38.398449    3725 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:38.398452    3725 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:38.399441    1595 retry.go:31] will retry after 10.18399301s: exit status 7
E1001 12:25:42.759895    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.729708ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:25:48.657266    3730 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:25:48.657458    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:48.657463    3730 out.go:358] Setting ErrFile to fd 2...
	I1001 12:25:48.657466    3730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:25:48.657642    3730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:25:48.657810    3730 out.go:352] Setting JSON to false
	I1001 12:25:48.657823    3730 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:25:48.657866    3730 notify.go:220] Checking for updates...
	I1001 12:25:48.658098    3730 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:25:48.658110    3730 status.go:174] checking status of multinode-301000 ...
	I1001 12:25:48.658413    3730 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:25:48.658418    3730 status.go:384] host is not running, skipping remaining checks
	I1001 12:25:48.658421    3730 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1001 12:25:48.659403    1595 retry.go:31] will retry after 11.775054642s: exit status 7
E1001 12:25:52.109412    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr: exit status 7 (73.8685ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:26:00.508482    3735 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:00.508699    3735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:00.508704    3735 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:00.508707    3735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:00.508872    3735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:00.509037    3735 out.go:352] Setting JSON to false
	I1001 12:26:00.509053    3735 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:26:00.509076    3735 notify.go:220] Checking for updates...
	I1001 12:26:00.509338    3735 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:00.509349    3735 status.go:174] checking status of multinode-301000 ...
	I1001 12:26:00.509649    3735 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:26:00.509654    3735 status.go:384] host is not running, skipping remaining checks
	I1001 12:26:00.509657    3735 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-301000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (32.349833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (46.28s)
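
The `retry.go:31] will retry after …` lines above show the harness polling `minikube status` with a growing, jittered delay (roughly 0.7s up to 11.8s) before giving up at multinode_test.go:294. A rough Go sketch of that poll-with-backoff pattern, assuming a generic doubling delay with jitter rather than minikube's actual retry package; the command, cap, and deadline are illustrative:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // pollStatus re-runs a status command until it succeeds or the deadline
    // passes, sleeping a jittered, roughly doubling interval between tries.
    func pollStatus(deadline time.Duration) error {
        start := time.Now()
        wait := 500 * time.Millisecond
        for {
            err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-301000", "status").Run()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("cluster never reported healthy: %w", err)
            }
            sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if wait < 8*time.Second {
                wait *= 2
            }
        }
    }

    func main() {
        if err := pollStatus(45 * time.Second); err != nil {
            fmt.Println(err)
        }
    }

Since the VM never starts, every poll in this run returns exit status 7 and the loop can only time out.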

TestMultiNode/serial/RestartKeepsNodes (7.21s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-301000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-301000: (1.851832375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.231738334s)

-- stdout --
	* [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:26:02.482541    3753 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:02.482714    3753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:02.482719    3753 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:02.482723    3753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:02.482896    3753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:02.484120    3753 out.go:352] Setting JSON to false
	I1001 12:26:02.503046    3753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3327,"bootTime":1727807435,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:26:02.503116    3753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:26:02.508101    3753 out.go:177] * [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:26:02.515016    3753 notify.go:220] Checking for updates...
	I1001 12:26:02.529131    3753 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:26:02.532148    3753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:26:02.535155    3753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:26:02.538055    3753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:26:02.541095    3753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:26:02.544113    3753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:26:02.547391    3753 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:02.547449    3753 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:26:02.552071    3753 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:26:02.559038    3753 start.go:297] selected driver: qemu2
	I1001 12:26:02.559046    3753 start.go:901] validating driver "qemu2" against &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:26:02.559111    3753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:26:02.561564    3753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:26:02.561592    3753 cni.go:84] Creating CNI manager for ""
	I1001 12:26:02.561625    3753 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 12:26:02.561681    3753 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:26:02.565896    3753 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:02.573049    3753 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I1001 12:26:02.577048    3753 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:26:02.577071    3753 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:26:02.577086    3753 cache.go:56] Caching tarball of preloaded images
	I1001 12:26:02.577154    3753 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:26:02.577167    3753 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:26:02.577228    3753 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/multinode-301000/config.json ...
	I1001 12:26:02.577783    3753 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:02.577833    3753 start.go:364] duration metric: took 43.209µs to acquireMachinesLock for "multinode-301000"
	I1001 12:26:02.577843    3753 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:26:02.577847    3753 fix.go:54] fixHost starting: 
	I1001 12:26:02.577978    3753 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W1001 12:26:02.577989    3753 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:26:02.583055    3753 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I1001 12:26:02.591086    3753 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:02.591141    3753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:d5:20:ed:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:26:02.593417    3753 main.go:141] libmachine: STDOUT: 
	I1001 12:26:02.593442    3753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:02.593493    3753 fix.go:56] duration metric: took 15.626625ms for fixHost
	I1001 12:26:02.593497    3753 start.go:83] releasing machines lock for "multinode-301000", held for 15.659209ms
	W1001 12:26:02.593505    3753 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:26:02.593535    3753 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:02.593540    3753 start.go:729] Will try again in 5 seconds ...
	I1001 12:26:07.595607    3753 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:07.596029    3753 start.go:364] duration metric: took 337.167µs to acquireMachinesLock for "multinode-301000"
	I1001 12:26:07.596163    3753 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:26:07.596182    3753 fix.go:54] fixHost starting: 
	I1001 12:26:07.596831    3753 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W1001 12:26:07.596856    3753 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:26:07.605443    3753 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I1001 12:26:07.610366    3753 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:07.610554    3753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:d5:20:ed:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:26:07.620409    3753 main.go:141] libmachine: STDOUT: 
	I1001 12:26:07.620473    3753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:07.620579    3753 fix.go:56] duration metric: took 24.396375ms for fixHost
	I1001 12:26:07.620597    3753 start.go:83] releasing machines lock for "multinode-301000", held for 24.544209ms
	W1001 12:26:07.620810    3753 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:07.627329    3753 out.go:201] 
	W1001 12:26:07.631454    3753 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:26:07.631488    3753 out.go:270] * 
	* 
	W1001 12:26:07.633952    3753 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:26:07.643294    3753 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-301000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (32.751959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.21s)
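
The restart itself fails before Kubernetes is ever involved: both attempts die in `socket_vmnet_client` with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing was accepting connections on the socket_vmnet unix socket that the qemu-system-aarch64 invocation is wrapped in. A quick Go probe for that precondition (the socket path is taken from the log; the probe itself is illustrative, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeSocket dials the socket_vmnet unix socket; "connection refused" or
    // "no such file or directory" here means the daemon is not serving, which
    // is the failure mode behind every VM restart in this run.
    func probeSocket(path string) error {
        conn, err := net.DialTimeout("unix", path, 2*time.Second)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        if err := probeSocket("/var/run/socket_vmnet"); err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, the remedy is to (re)start the socket_vmnet daemon on the agent; the exact mechanism depends on how it was installed.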

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 node delete m03: exit status 83 (39.682625ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-301000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (29.470042ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:26:07.825698    3767 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:07.825852    3767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:07.825855    3767 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:07.825857    3767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:07.825981    3767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:07.826097    3767 out.go:352] Setting JSON to false
	I1001 12:26:07.826107    3767 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:26:07.826168    3767 notify.go:220] Checking for updates...
	I1001 12:26:07.826316    3767 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:07.826326    3767 status.go:174] checking status of multinode-301000 ...
	I1001 12:26:07.826546    3767 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:26:07.826551    3767 status.go:384] host is not running, skipping remaining checks
	I1001 12:26:07.826553    3767 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.702125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
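
The post-mortem helper requests a single field with `status --format={{.Host}}`: the format string is a Go text/template evaluated against a status value shaped like the one printed in the logs (`&{Name:multinode-301000 Host:Stopped Kubelet:Stopped …}`). A minimal sketch of that mechanism, with the struct trimmed to the fields visible in this report:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields visible in the logged status struct.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        st := Status{Name: "multinode-301000", Host: "Stopped",
            Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
        // The --format argument is parsed as a template and rendered per node.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = tmpl.Execute(os.Stdout, st) // prints: Stopped
    }
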

TestMultiNode/serial/StopMultiNode (2.24s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-301000 stop: (2.11071225s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status: exit status 7 (62.295833ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr: exit status 7 (32.051666ms)

-- stdout --
	multinode-301000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1001 12:26:10.060970    3785 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:10.061118    3785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:10.061121    3785 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:10.061123    3785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:10.061256    3785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:10.061387    3785 out.go:352] Setting JSON to false
	I1001 12:26:10.061398    3785 mustload.go:65] Loading cluster: multinode-301000
	I1001 12:26:10.061461    3785 notify.go:220] Checking for updates...
	I1001 12:26:10.061633    3785 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:10.061643    3785 status.go:174] checking status of multinode-301000 ...
	I1001 12:26:10.061883    3785 status.go:371] multinode-301000 host status = "Stopped" (err=<nil>)
	I1001 12:26:10.061886    3785 status.go:384] host is not running, skipping remaining checks
	I1001 12:26:10.061891    3785 status.go:176] multinode-301000 status: &{Name:multinode-301000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-301000 status --alsologtostderr": multinode-301000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.7605ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.24s)
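
Both assertions above ("incorrect number of stopped hosts", "incorrect number of stopped kubelets") are counting checks over the status output: for a two-node cluster the test expects `host: Stopped` and `kubelet: Stopped` to each appear once per node, but only the control-plane entry exists because the second node was never added. A sketch of that style of check; the helper and expected count are illustrative, not the literal multinode_test.go code:

    package main

    import (
        "fmt"
        "strings"
    )

    // expectStopped counts per-node status lines in `minikube status` output;
    // a two-node cluster should report two stopped hosts and two stopped kubelets.
    func expectStopped(out string, nodes int) error {
        for _, line := range []string{"host: Stopped", "kubelet: Stopped"} {
            if got := strings.Count(out, line); got != nodes {
                return fmt.Errorf("incorrect number of %q: got %d, want %d", line, got, nodes)
            }
        }
        return nil
    }

    func main() {
        out := "multinode-301000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
        fmt.Println(expectStopped(out, 2)) // fails: only one node is reported
    }
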

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180439792s)

-- stdout --
	* [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-301000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:26:10.120246    3789 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:10.120370    3789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:10.120373    3789 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:10.120376    3789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:10.120502    3789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:10.121514    3789 out.go:352] Setting JSON to false
	I1001 12:26:10.137547    3789 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3335,"bootTime":1727807435,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:26:10.137618    3789 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:26:10.141490    3789 out.go:177] * [multinode-301000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:26:10.148385    3789 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:26:10.148422    3789 notify.go:220] Checking for updates...
	I1001 12:26:10.156404    3789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:26:10.160363    3789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:26:10.163411    3789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:26:10.166377    3789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:26:10.169325    3789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:26:10.172676    3789 config.go:182] Loaded profile config "multinode-301000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:10.172943    3789 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:26:10.177317    3789 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:26:10.184324    3789 start.go:297] selected driver: qemu2
	I1001 12:26:10.184330    3789 start.go:901] validating driver "qemu2" against &{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:26:10.184395    3789 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:26:10.186654    3789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:26:10.186677    3789 cni.go:84] Creating CNI manager for ""
	I1001 12:26:10.186696    3789 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 12:26:10.186737    3789 start.go:340] cluster config:
	{Name:multinode-301000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-301000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:26:10.190246    3789 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:10.195320    3789 out.go:177] * Starting "multinode-301000" primary control-plane node in "multinode-301000" cluster
	I1001 12:26:10.199315    3789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:26:10.199330    3789 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:26:10.199338    3789 cache.go:56] Caching tarball of preloaded images
	I1001 12:26:10.199389    3789 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:26:10.199395    3789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:26:10.199463    3789 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/multinode-301000/config.json ...
	I1001 12:26:10.199938    3789 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:10.199970    3789 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "multinode-301000"
	I1001 12:26:10.199978    3789 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:26:10.199983    3789 fix.go:54] fixHost starting: 
	I1001 12:26:10.200120    3789 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W1001 12:26:10.200130    3789 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:26:10.208323    3789 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I1001 12:26:10.212208    3789 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:10.212251    3789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:d5:20:ed:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:26:10.214289    3789 main.go:141] libmachine: STDOUT: 
	I1001 12:26:10.214310    3789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:10.214340    3789 fix.go:56] duration metric: took 14.35575ms for fixHost
	I1001 12:26:10.214346    3789 start.go:83] releasing machines lock for "multinode-301000", held for 14.3715ms
	W1001 12:26:10.214353    3789 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:26:10.214397    3789 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:10.214402    3789 start.go:729] Will try again in 5 seconds ...
	I1001 12:26:15.216570    3789 start.go:360] acquireMachinesLock for multinode-301000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:15.216926    3789 start.go:364] duration metric: took 256.416µs to acquireMachinesLock for "multinode-301000"
	I1001 12:26:15.217052    3789 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:26:15.217074    3789 fix.go:54] fixHost starting: 
	I1001 12:26:15.217760    3789 fix.go:112] recreateIfNeeded on multinode-301000: state=Stopped err=<nil>
	W1001 12:26:15.217785    3789 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:26:15.222167    3789 out.go:177] * Restarting existing qemu2 VM for "multinode-301000" ...
	I1001 12:26:15.229130    3789 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:15.229366    3789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:e8:d5:20:ed:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/multinode-301000/disk.qcow2
	I1001 12:26:15.238497    3789 main.go:141] libmachine: STDOUT: 
	I1001 12:26:15.238603    3789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:15.238734    3789 fix.go:56] duration metric: took 21.659125ms for fixHost
	I1001 12:26:15.238758    3789 start.go:83] releasing machines lock for "multinode-301000", held for 21.803291ms
	W1001 12:26:15.239013    3789 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:15.246152    3789 out.go:201] 
	W1001 12:26:15.249166    3789 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:26:15.249199    3789 out.go:270] * 
	* 
	W1001 12:26:15.251510    3789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:26:15.260135    3789 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-301000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (70.65475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
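
Note on the recurring root cause: every qemu2 start in this report fails at the same step. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must obtain a network file descriptor from the socket_vmnet daemon listening on /var/run/socket_vmnet; "Connection refused" means no daemon is listening there. A hedged diagnostic sketch for the build host, assuming the daemon binary was installed alongside the client shown in the log (the gateway flag is the upstream default and may need adjusting):

	# Is anything listening on the socket?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon; vmnet access requires root.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

Until the daemon is up, every test that boots a VM with --driver=qemu2 will exit with GUEST_PROVISION exactly as seen here.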

TestMultiNode/serial/ValidateNameConflict (20.02s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-301000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000-m01 --driver=qemu2 : exit status 80 (9.912531917s)

-- stdout --
	* [multinode-301000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000-m01" primary control-plane node in "multinode-301000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 : exit status 80 (9.877454917s)

-- stdout --
	* [multinode-301000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-301000-m02" primary control-plane node in "multinode-301000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-301000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-301000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-301000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-301000: exit status 83 (82.497875ms)

-- stdout --
	* The control-plane node multinode-301000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-301000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-301000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-301000 -n multinode-301000: exit status 7 (29.533625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-301000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.02s)
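
Note on this failure: the test checks that profiles whose names collide with generated node names (multinode-301000-m01, multinode-301000-m02) are handled cleanly. Both conflicting starts above fail on the socket_vmnet refusal before the name check matters, and the final "node add" exits 83 only because the control-plane host is stopped. The conflict scenario, replayed by hand as a sketch (each command appears verbatim in the log above):

	out/minikube-darwin-arm64 node list -p multinode-301000
	out/minikube-darwin-arm64 start -p multinode-301000-m02 --driver=qemu2   # collides with the would-be second node's name
	out/minikube-darwin-arm64 delete -p multinode-301000-m02                 # cleanup, as the test performs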

TestPreload (10.19s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-400000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-400000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.039172917s)

-- stdout --
	* [test-preload-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-400000" primary control-plane node in "test-preload-400000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-400000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I1001 12:26:35.506656    3846 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:26:35.506782    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:35.506785    3846 out.go:358] Setting ErrFile to fd 2...
	I1001 12:26:35.506788    3846 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:26:35.506919    3846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:26:35.507968    3846 out.go:352] Setting JSON to false
	I1001 12:26:35.524072    3846 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3360,"bootTime":1727807435,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:26:35.524138    3846 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:26:35.531605    3846 out.go:177] * [test-preload-400000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:26:35.539490    3846 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:26:35.539539    3846 notify.go:220] Checking for updates...
	I1001 12:26:35.545493    3846 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:26:35.548522    3846 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:26:35.551509    3846 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:26:35.554460    3846 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:26:35.557465    3846 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:26:35.560783    3846 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:26:35.560828    3846 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:26:35.565388    3846 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:26:35.572495    3846 start.go:297] selected driver: qemu2
	I1001 12:26:35.572501    3846 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:26:35.572507    3846 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:26:35.574616    3846 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:26:35.578490    3846 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:26:35.581577    3846 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:26:35.581593    3846 cni.go:84] Creating CNI manager for ""
	I1001 12:26:35.581613    3846 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:26:35.581617    3846 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:26:35.581651    3846 start.go:340] cluster config:
	{Name:test-preload-400000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:26:35.585202    3846 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.591431    3846 out.go:177] * Starting "test-preload-400000" primary control-plane node in "test-preload-400000" cluster
	I1001 12:26:35.595423    3846 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1001 12:26:35.595497    3846 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/test-preload-400000/config.json ...
	I1001 12:26:35.595514    3846 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/test-preload-400000/config.json: {Name:mkd83198448fb4b1737a03af5c326cca3b04b5eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:26:35.595536    3846 cache.go:107] acquiring lock: {Name:mk6c1930d14b46ca06bda2cab6fa5b0fecacbe45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595535    3846 cache.go:107] acquiring lock: {Name:mkb32d66446dc6ccc22c2438745b403560e167da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595537    3846 cache.go:107] acquiring lock: {Name:mk3b307558ad851f6ec70cfca7e1b2f433171d3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595609    3846 cache.go:107] acquiring lock: {Name:mkba6f5e76f1e5357cb204fa650b5cfedd5ad9b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595753    3846 cache.go:107] acquiring lock: {Name:mk4aa8c7f640c80fc5974cf0ea512bdc638b66da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595771    3846 cache.go:107] acquiring lock: {Name:mk5a01296406d30b5270390fdc76418651a6b049 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595781    3846 cache.go:107] acquiring lock: {Name:mkef2d827bd5e8970f81fce396daab61ed7d33a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595804    3846 cache.go:107] acquiring lock: {Name:mkb95633054f4f0d9a1e7a0cf807f2740b2be431 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:26:35.595929    3846 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 12:26:35.595935    3846 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 12:26:35.595961    3846 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 12:26:35.595978    3846 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:26:35.596059    3846 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:26:35.596130    3846 start.go:360] acquireMachinesLock for test-preload-400000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:35.596172    3846 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 12:26:35.596176    3846 start.go:364] duration metric: took 33.709µs to acquireMachinesLock for "test-preload-400000"
	I1001 12:26:35.596190    3846 start.go:93] Provisioning new machine with config: &{Name:test-preload-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:26:35.596221    3846 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:26:35.596231    3846 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:26:35.596222    3846 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 12:26:35.600455    3846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:26:35.608777    3846 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:26:35.609440    3846 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:26:35.609529    3846 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 12:26:35.609571    3846 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 12:26:35.611147    3846 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 12:26:35.611433    3846 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 12:26:35.611818    3846 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:26:35.611938    3846 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 12:26:35.619371    3846 start.go:159] libmachine.API.Create for "test-preload-400000" (driver="qemu2")
	I1001 12:26:35.619394    3846 client.go:168] LocalClient.Create starting
	I1001 12:26:35.619481    3846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:26:35.619516    3846 main.go:141] libmachine: Decoding PEM data...
	I1001 12:26:35.619526    3846 main.go:141] libmachine: Parsing certificate...
	I1001 12:26:35.619575    3846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:26:35.619598    3846 main.go:141] libmachine: Decoding PEM data...
	I1001 12:26:35.619610    3846 main.go:141] libmachine: Parsing certificate...
	I1001 12:26:35.619974    3846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:26:35.780394    3846 main.go:141] libmachine: Creating SSH key...
	I1001 12:26:35.933689    3846 main.go:141] libmachine: Creating Disk image...
	I1001 12:26:35.933715    3846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:26:35.933915    3846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:35.943773    3846 main.go:141] libmachine: STDOUT: 
	I1001 12:26:35.943802    3846 main.go:141] libmachine: STDERR: 
	I1001 12:26:35.943860    3846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2 +20000M
	I1001 12:26:35.952624    3846 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:26:35.952643    3846 main.go:141] libmachine: STDERR: 
	I1001 12:26:35.952689    3846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:35.952693    3846 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:26:35.952710    3846 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:35.952755    3846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:5a:38:da:1b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:35.954489    3846 main.go:141] libmachine: STDOUT: 
	I1001 12:26:35.954504    3846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:35.954525    3846 client.go:171] duration metric: took 335.131958ms to LocalClient.Create
	I1001 12:26:37.672716    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W1001 12:26:37.745647    3846 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 12:26:37.745739    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 12:26:37.786855    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1001 12:26:37.800270    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 12:26:37.954985    3846 start.go:128] duration metric: took 2.358794167s to createHost
	I1001 12:26:37.955019    3846 start.go:83] releasing machines lock for "test-preload-400000", held for 2.358882083s
	W1001 12:26:37.955081    3846 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:37.974866    3846 out.go:177] * Deleting "test-preload-400000" in qemu2 ...
	I1001 12:26:37.979982    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1001 12:26:37.980019    3846 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.384483875s
	I1001 12:26:37.980054    3846 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1001 12:26:38.011499    3846 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:38.011526    3846 start.go:729] Will try again in 5 seconds ...
	W1001 12:26:38.036703    3846 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 12:26:38.036773    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 12:26:38.307109    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1001 12:26:38.364182    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 12:26:38.367899    3846 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1001 12:26:39.480715    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1001 12:26:39.480780    3846 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.885096583s
	I1001 12:26:39.480816    3846 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1001 12:26:40.088307    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 12:26:40.088356    3846 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.49291125s
	I1001 12:26:40.088379    3846 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 12:26:40.108663    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1001 12:26:40.108704    3846 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.513096208s
	I1001 12:26:40.108744    3846 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1001 12:26:42.224037    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1001 12:26:42.224096    3846 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.628697625s
	I1001 12:26:42.224123    3846 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1001 12:26:42.386948    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1001 12:26:42.387046    3846 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.791655875s
	I1001 12:26:42.387096    3846 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1001 12:26:42.814790    3846 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1001 12:26:42.814853    3846 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.219197792s
	I1001 12:26:42.814885    3846 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1001 12:26:43.012643    3846 start.go:360] acquireMachinesLock for test-preload-400000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:26:43.013048    3846 start.go:364] duration metric: took 332.125µs to acquireMachinesLock for "test-preload-400000"
	I1001 12:26:43.013143    3846 start.go:93] Provisioning new machine with config: &{Name:test-preload-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:26:43.013404    3846 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:26:43.024869    3846 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:26:43.075831    3846 start.go:159] libmachine.API.Create for "test-preload-400000" (driver="qemu2")
	I1001 12:26:43.075869    3846 client.go:168] LocalClient.Create starting
	I1001 12:26:43.076000    3846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:26:43.076076    3846 main.go:141] libmachine: Decoding PEM data...
	I1001 12:26:43.076097    3846 main.go:141] libmachine: Parsing certificate...
	I1001 12:26:43.076159    3846 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:26:43.076208    3846 main.go:141] libmachine: Decoding PEM data...
	I1001 12:26:43.076224    3846 main.go:141] libmachine: Parsing certificate...
	I1001 12:26:43.076757    3846 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:26:43.275556    3846 main.go:141] libmachine: Creating SSH key...
	I1001 12:26:43.444635    3846 main.go:141] libmachine: Creating Disk image...
	I1001 12:26:43.444642    3846 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:26:43.444821    3846 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:43.454267    3846 main.go:141] libmachine: STDOUT: 
	I1001 12:26:43.454285    3846 main.go:141] libmachine: STDERR: 
	I1001 12:26:43.454337    3846 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2 +20000M
	I1001 12:26:43.462297    3846 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:26:43.462318    3846 main.go:141] libmachine: STDERR: 
	I1001 12:26:43.462328    3846 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:43.462333    3846 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:26:43.462349    3846 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:26:43.462395    3846 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:94:ee:2d:16:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/test-preload-400000/disk.qcow2
	I1001 12:26:43.464149    3846 main.go:141] libmachine: STDOUT: 
	I1001 12:26:43.464164    3846 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:26:43.464178    3846 client.go:171] duration metric: took 388.308083ms to LocalClient.Create
	I1001 12:26:45.464466    3846 start.go:128] duration metric: took 2.451067959s to createHost
	I1001 12:26:45.464523    3846 start.go:83] releasing machines lock for "test-preload-400000", held for 2.451502333s
	W1001 12:26:45.464917    3846 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-400000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:26:45.483072    3846 out.go:201] 
	W1001 12:26:45.487095    3846 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:26:45.487136    3846 out.go:270] * 
	* 
	W1001 12:26:45.489414    3846 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:26:45.503852    3846 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-400000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-01 12:26:45.520365 -0700 PDT m=+2436.360988376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-400000 -n test-preload-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-400000 -n test-preload-400000: exit status 7 (65.67875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-400000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-400000
--- FAIL: TestPreload (10.19s)
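Note: every failure in this group dies at the same step: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet before launching the VM. As a minimal diagnostic sketch (an illustration, not part of the minikube test suite), the standalone Go program below reproduces the failing dial against the socket path shown in the log above:

	// socketcheck.go: standalone sketch (not minikube code) that dials the
	// unix socket the qemu2 driver hands to socket_vmnet_client above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the log above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same failure mode as this report: "Connection refused" when
			// the socket_vmnet daemon is not running on the build host.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet reachable")
	}

A check like this failing for the entire run, as the identical "Connection refused" errors in the following tests suggest, points at the host-side socket_vmnet service rather than at any individual test.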

TestScheduledStopUnix (10.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-954000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-954000 --memory=2048 --driver=qemu2 : exit status 80 (10.14324375s)

-- stdout --
	* [scheduled-stop-954000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-954000" primary control-plane node in "scheduled-stop-954000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-954000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-954000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-954000" primary control-plane node in "scheduled-stop-954000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-954000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-954000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-01 12:26:55.812709 -0700 PDT m=+2446.653545043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-954000 -n scheduled-stop-954000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-954000 -n scheduled-stop-954000: exit status 7 (68.148625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-954000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-954000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-954000
--- FAIL: TestScheduledStopUnix (10.30s)
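Note: the stdout above also documents minikube's retry-once behavior: when the first StartHost fails, it deletes the half-created profile ("* Deleting ... in qemu2 ...") and attempts one more create before exiting with GUEST_PROVISION. A hypothetical sketch of that control flow (names invented for illustration; this is not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
	)

	// startWithOneRetry mirrors the create -> delete -> retry-once pattern
	// visible in the log: one cleanup, one second attempt, then give up.
	func startWithOneRetry(create func() error, cleanup func()) error {
		if err := create(); err == nil {
			return nil
		}
		cleanup() // corresponds to `* Deleting "scheduled-stop-954000" in qemu2 ...`
		if err := create(); err != nil {
			return fmt.Errorf("error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		refused := errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		err := startWithOneRetry(
			func() error { return refused }, // both attempts hit the same refused socket
			func() { fmt.Println("deleting half-created VM") },
		)
		fmt.Println(err) // analogous to the GUEST_PROVISION exit above
	}

Because the underlying socket is refused on the host, the second attempt is guaranteed to fail the same way, which is why both create attempts in the stdout print identical errors.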

TestSkaffold (16.59s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3868100636 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe3868100636 version: (1.068579208s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-909000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-909000 --memory=2600 --driver=qemu2 : exit status 80 (9.860307791s)

-- stdout --
	* [skaffold-909000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-909000" primary control-plane node in "skaffold-909000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-909000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-909000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-909000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-909000" primary control-plane node in "skaffold-909000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-909000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-909000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-01 12:27:12.408073 -0700 PDT m=+2463.249251668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-909000 -n skaffold-909000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-909000 -n skaffold-909000: exit status 7 (60.8215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-909000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-909000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-909000
--- FAIL: TestSkaffold (16.59s)
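Note: the "(dbg) Run:" and "(dbg) Non-zero exit:" lines come from the harness shelling out to the minikube binary and asserting on its exit status (80 here). A minimal sketch of that pattern using only Go's standard library (the invocation is copied from the log; the surrounding program is illustrative, not the harness itself):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the failing test ran above.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"start", "-p", "skaffold-909000", "--memory=2600", "--driver=qemu2")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// "exit status 80" in the report corresponds to ExitCode() == 80.
			fmt.Printf("minikube exited with status %d\n%s", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("minikube succeeded:\n%s", out)
	}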

TestRunningBinaryUpgrade (621.75s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.708140537 start -p running-upgrade-810000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.708140537 start -p running-upgrade-810000 --memory=2200 --vm-driver=qemu2 : (1m20.657984833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1001 12:30:42.753844    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m22.903913083s)

-- stdout --
	* [running-upgrade-810000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-810000" primary control-plane node in "running-upgrade-810000" cluster
	* Updating the running qemu2 "running-upgrade-810000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1001 12:29:19.734959    4242 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:29:19.735091    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:29:19.735095    4242 out.go:358] Setting ErrFile to fd 2...
	I1001 12:29:19.735097    4242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:29:19.735210    4242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:29:19.736256    4242 out.go:352] Setting JSON to false
	I1001 12:29:19.752430    4242 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3524,"bootTime":1727807435,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:29:19.752505    4242 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:29:19.757142    4242 out.go:177] * [running-upgrade-810000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:29:19.763924    4242 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:29:19.763961    4242 notify.go:220] Checking for updates...
	I1001 12:29:19.772050    4242 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:29:19.776021    4242 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:29:19.779107    4242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:29:19.782156    4242 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:29:19.785105    4242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:29:19.788416    4242 config.go:182] Loaded profile config "running-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:29:19.792104    4242 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 12:29:19.795014    4242 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:29:19.799104    4242 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:29:19.804971    4242 start.go:297] selected driver: qemu2
	I1001 12:29:19.804977    4242 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:29:19.805029    4242 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:29:19.807212    4242 cni.go:84] Creating CNI manager for ""
	I1001 12:29:19.807247    4242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:29:19.807270    4242 start.go:340] cluster config:
	{Name:running-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:29:19.807327    4242 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:29:19.815071    4242 out.go:177] * Starting "running-upgrade-810000" primary control-plane node in "running-upgrade-810000" cluster
	I1001 12:29:19.819003    4242 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:29:19.819016    4242 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 12:29:19.819021    4242 cache.go:56] Caching tarball of preloaded images
	I1001 12:29:19.819077    4242 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:29:19.819086    4242 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 12:29:19.819133    4242 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/config.json ...
	I1001 12:29:19.819657    4242 start.go:360] acquireMachinesLock for running-upgrade-810000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:29:19.819695    4242 start.go:364] duration metric: took 31.333µs to acquireMachinesLock for "running-upgrade-810000"
	I1001 12:29:19.819704    4242 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:29:19.819709    4242 fix.go:54] fixHost starting: 
	I1001 12:29:19.820343    4242 fix.go:112] recreateIfNeeded on running-upgrade-810000: state=Running err=<nil>
	W1001 12:29:19.820350    4242 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:29:19.824149    4242 out.go:177] * Updating the running qemu2 "running-upgrade-810000" VM ...
	I1001 12:29:19.832100    4242 machine.go:93] provisionDockerMachine start ...
	I1001 12:29:19.832142    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:19.832248    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:19.832256    4242 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 12:29:19.891730    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-810000
	
	I1001 12:29:19.891745    4242 buildroot.go:166] provisioning hostname "running-upgrade-810000"
	I1001 12:29:19.891795    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:19.891913    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:19.891919    4242 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-810000 && echo "running-upgrade-810000" | sudo tee /etc/hostname
	I1001 12:29:19.952722    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-810000
	
	I1001 12:29:19.952776    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:19.952892    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:19.952903    4242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-810000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-810000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-810000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 12:29:20.012870    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 12:29:20.012881    4242 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19736-1073/.minikube CaCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19736-1073/.minikube}
	I1001 12:29:20.012893    4242 buildroot.go:174] setting up certificates
	I1001 12:29:20.012897    4242 provision.go:84] configureAuth start
	I1001 12:29:20.012901    4242 provision.go:143] copyHostCerts
	I1001 12:29:20.012972    4242 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem, removing ...
	I1001 12:29:20.012977    4242 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem
	I1001 12:29:20.013101    4242 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem (1078 bytes)
	I1001 12:29:20.013289    4242 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem, removing ...
	I1001 12:29:20.013293    4242 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem
	I1001 12:29:20.013341    4242 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem (1123 bytes)
	I1001 12:29:20.013454    4242 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem, removing ...
	I1001 12:29:20.013457    4242 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem
	I1001 12:29:20.013502    4242 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem (1675 bytes)
	I1001 12:29:20.013594    4242 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-810000 san=[127.0.0.1 localhost minikube running-upgrade-810000]
	I1001 12:29:20.118774    4242 provision.go:177] copyRemoteCerts
	I1001 12:29:20.118820    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 12:29:20.118829    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:29:20.150592    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 12:29:20.157611    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 12:29:20.164241    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 12:29:20.170963    4242 provision.go:87] duration metric: took 158.060291ms to configureAuth
	I1001 12:29:20.170972    4242 buildroot.go:189] setting minikube options for container-runtime
	I1001 12:29:20.171084    4242 config.go:182] Loaded profile config "running-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:29:20.171132    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:20.171240    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:20.171245    4242 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 12:29:20.227118    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 12:29:20.227126    4242 buildroot.go:70] root file system type: tmpfs
	I1001 12:29:20.227182    4242 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 12:29:20.227234    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:20.227338    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:20.227370    4242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 12:29:20.288587    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 12:29:20.288647    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:20.288765    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:20.288773    4242 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 12:29:20.350241    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 12:29:20.350253    4242 machine.go:96] duration metric: took 518.157792ms to provisionDockerMachine
	I1001 12:29:20.350259    4242 start.go:293] postStartSetup for "running-upgrade-810000" (driver="qemu2")
	I1001 12:29:20.350266    4242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 12:29:20.350317    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 12:29:20.350326    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:29:20.382525    4242 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 12:29:20.383717    4242 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 12:29:20.383724    4242 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/addons for local assets ...
	I1001 12:29:20.383797    4242 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/files for local assets ...
	I1001 12:29:20.383915    4242 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem -> 15952.pem in /etc/ssl/certs
	I1001 12:29:20.384044    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 12:29:20.387249    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:29:20.393824    4242 start.go:296] duration metric: took 43.561083ms for postStartSetup
	I1001 12:29:20.393844    4242 fix.go:56] duration metric: took 574.143583ms for fixHost
	I1001 12:29:20.393885    4242 main.go:141] libmachine: Using SSH client type: native
	I1001 12:29:20.393980    4242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102979c00] 0x10297c440 <nil>  [] 0s} localhost 50260 <nil> <nil>}
	I1001 12:29:20.393985    4242 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 12:29:20.453099    4242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727810960.217217974
	
	I1001 12:29:20.453107    4242 fix.go:216] guest clock: 1727810960.217217974
	I1001 12:29:20.453110    4242 fix.go:229] Guest: 2024-10-01 12:29:20.217217974 -0700 PDT Remote: 2024-10-01 12:29:20.393846 -0700 PDT m=+0.677845835 (delta=-176.628026ms)
	I1001 12:29:20.453123    4242 fix.go:200] guest clock delta is within tolerance: -176.628026ms
	I1001 12:29:20.453126    4242 start.go:83] releasing machines lock for "running-upgrade-810000", held for 633.439375ms
	I1001 12:29:20.453185    4242 ssh_runner.go:195] Run: cat /version.json
	I1001 12:29:20.453190    4242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 12:29:20.453193    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:29:20.453207    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	W1001 12:29:20.453754    4242 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50260: connect: connection refused
	I1001 12:29:20.453776    4242 retry.go:31] will retry after 139.606669ms: dial tcp [::1]:50260: connect: connection refused
	W1001 12:29:20.483569    4242 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 12:29:20.483625    4242 ssh_runner.go:195] Run: systemctl --version
	I1001 12:29:20.485403    4242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 12:29:20.487084    4242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 12:29:20.487108    4242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 12:29:20.489860    4242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 12:29:20.494342    4242 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 12:29:20.494353    4242 start.go:495] detecting cgroup driver to use...
	I1001 12:29:20.494422    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:29:20.499618    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 12:29:20.502919    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 12:29:20.506046    4242 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 12:29:20.506076    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 12:29:20.514060    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:29:20.516973    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 12:29:20.520108    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:29:20.523578    4242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 12:29:20.527305    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 12:29:20.530289    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 12:29:20.532951    4242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 12:29:20.536134    4242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 12:29:20.539264    4242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 12:29:20.541820    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:20.625849    4242 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 12:29:20.632167    4242 start.go:495] detecting cgroup driver to use...
	I1001 12:29:20.632252    4242 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 12:29:20.640885    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:29:20.648728    4242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 12:29:20.691314    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:29:20.696672    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:29:20.701547    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:29:20.706964    4242 ssh_runner.go:195] Run: which cri-dockerd
	I1001 12:29:20.708169    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 12:29:20.711191    4242 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 12:29:20.716337    4242 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 12:29:20.791682    4242 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 12:29:20.867311    4242 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 12:29:20.867366    4242 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 12:29:20.872794    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:20.955448    4242 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:29:22.678046    4242 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.722617625s)
	I1001 12:29:22.678114    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 12:29:22.682514    4242 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 12:29:22.691744    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:29:22.697176    4242 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 12:29:22.772546    4242 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 12:29:22.836678    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:22.900677    4242 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 12:29:22.907702    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:29:22.912338    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:22.979638    4242 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 12:29:23.018964    4242 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 12:29:23.019042    4242 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 12:29:23.021026    4242 start.go:563] Will wait 60s for crictl version
	I1001 12:29:23.021073    4242 ssh_runner.go:195] Run: which crictl
	I1001 12:29:23.022275    4242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 12:29:23.034165    4242 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 12:29:23.034250    4242 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:29:23.050095    4242 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:29:23.070649    4242 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 12:29:23.070815    4242 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 12:29:23.072124    4242 kubeadm.go:883] updating cluster {Name:running-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 12:29:23.072174    4242 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:29:23.072226    4242 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:29:23.082944    4242 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:29:23.082956    4242 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:29:23.083007    4242 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:29:23.086235    4242 ssh_runner.go:195] Run: which lz4
	I1001 12:29:23.087479    4242 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 12:29:23.088653    4242 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 12:29:23.088664    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 12:29:24.066719    4242 docker.go:649] duration metric: took 979.294833ms to copy over tarball
	I1001 12:29:24.066786    4242 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 12:29:25.193609    4242 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.126832166s)
	I1001 12:29:25.193622    4242 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 12:29:25.208980    4242 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:29:25.212152    4242 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 12:29:25.217218    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:25.282502    4242 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:29:26.669758    4242 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.387267875s)
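
The block above is minikube's preload path: stat the tarball on the guest, copy it over when the stat fails, unpack it into /var with lz4, rewrite Docker's repositories.json, and restart the daemon so it picks up the restored image store. Below is a minimal Go sketch of that check-then-copy-then-extract sequence; runGuest and copyToGuest are hypothetical stand-ins for minikube's ssh_runner, and the ssh/scp target address is a placeholder, not something from this run.

    package preload

    import "os/exec"

    // runGuest executes a shell command inside the guest VM over ssh.
    // The target address is a placeholder for the test VM.
    func runGuest(cmd string) error {
        return exec.Command("ssh", "docker@10.0.2.15", cmd).Run()
    }

    // copyToGuest copies a local file to the guest, standing in for the
    // "scp ... --> /preloaded.tar.lz4" step in the log.
    func copyToGuest(local, remote string) error {
        return exec.Command("scp", local, "docker@10.0.2.15:"+remote).Run()
    }

    // Restore restores a cached image tarball into the guest if it is
    // not already present, then restarts Docker to pick up the images.
    func Restore(localTarball, remoteTarball string) error {
        // Existence check mirrors: stat -c "%s %y" /preloaded.tar.lz4
        if err := runGuest(`stat -c "%s %y" ` + remoteTarball); err != nil {
            if err := copyToGuest(localTarball, remoteTarball); err != nil {
                return err
            }
        }
        // Unpack into /var, preserving xattrs, as the tar invocation above shows.
        if err := runGuest("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remoteTarball); err != nil {
            return err
        }
        // Drop the tarball and bounce Docker so the daemon re-reads its store.
        if err := runGuest("rm " + remoteTarball); err != nil {
            return err
        }
        return runGuest("sudo systemctl restart docker")
    }
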
	I1001 12:29:26.669870    4242 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:29:26.686153    4242 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:29:26.686163    4242 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:29:26.686169    4242 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 12:29:26.690468    4242 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:29:26.692275    4242 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:29:26.694400    4242 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:29:26.694615    4242 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:29:26.696275    4242 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:29:26.696937    4242 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:29:26.697673    4242 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:29:26.697692    4242 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:29:26.699078    4242 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 12:29:26.699262    4242 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:29:26.700873    4242 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:29:26.700946    4242 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:29:26.702058    4242 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:29:26.702217    4242 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 12:29:26.703238    4242 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:29:26.704077    4242 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:29:28.827068    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:29:28.839768    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:29:28.868766    4242 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 12:29:28.868826    4242 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:29:28.868949    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:29:28.877448    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:29:28.884108    4242 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 12:29:28.884137    4242 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:29:28.884236    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:29:28.902002    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 12:29:28.910483    4242 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 12:29:28.910504    4242 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:29:28.910576    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:29:28.913095    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 12:29:28.922172    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W1001 12:29:29.049392    4242 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 12:29:29.049604    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:29:29.069461    4242 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 12:29:29.069497    4242 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:29:29.069591    4242 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:29:29.085103    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:29:29.328548    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W1001 12:29:29.332893    4242 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 12:29:29.334439    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:29:29.344595    4242 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 12:29:29.802127    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 12:29:29.802228    4242 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 12:29:29.802282    4242 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:29:29.802474    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:29:29.802595    4242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:29:29.802611    4242 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 12:29:29.802644    4242 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:29:29.802716    4242 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 12:29:29.802746    4242 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:29:29.802760    4242 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 12:29:29.802777    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 12:29:29.802785    4242 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 12:29:29.802833    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:29:29.802898    4242 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 12:29:29.871957    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 12:29:29.871975    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 12:29:29.872038    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 12:29:29.872087    4242 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 12:29:29.872096    4242 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 12:29:29.872103    4242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 12:29:29.872112    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 12:29:29.872103    4242 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:29:29.883265    4242 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 12:29:29.883272    4242 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 12:29:29.883291    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 12:29:29.883291    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 12:29:29.905992    4242 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 12:29:29.906006    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 12:29:29.974572    4242 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1001 12:29:29.974594    4242 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:29:29.974602    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1001 12:29:30.210409    4242 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 12:29:30.210429    4242 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:29:30.210437    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 12:29:30.268175    4242 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 12:29:30.268215    4242 cache_images.go:92] duration metric: took 3.582113333s to LoadCachedImages
	W1001 12:29:30.268253    4242 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
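
What happened above: the preloaded tarball still carries k8s.gcr.io image names, so every expected registry.k8s.io image is treated as missing, removed, and re-loaded from the host cache via `sudo cat <tar> | docker load`. Only the pause, storage-provisioner, and coredns tarballs exist in the host cache; the kube-scheduler tarball (and the other control-plane images) do not, which produces the "Unable to load cached images" warning. A minimal sketch of the remote load step, assuming the same placeholder ssh target as the sketch above:

    package images

    import "os/exec"

    // Load streams an image tarball already on the guest into the Docker
    // daemon. Passing the whole pipeline as one string lets the remote
    // shell run the pipe, matching the log's
    // `/bin/bash -c "sudo cat <tar> | docker load"` invocation.
    func Load(path string) error {
        return exec.Command("ssh", "docker@10.0.2.15",
            "sudo cat "+path+" | docker load").Run()
    }

It would be called once per transferred tarball, e.g. Load("/var/lib/minikube/images/pause_3.7").
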
	I1001 12:29:30.268259    4242 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 12:29:30.268313    4242 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-810000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 12:29:30.268392    4242 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 12:29:30.282190    4242 cni.go:84] Creating CNI manager for ""
	I1001 12:29:30.282202    4242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:29:30.282208    4242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 12:29:30.282217    4242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-810000 NodeName:running-upgrade-810000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 12:29:30.282288    4242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-810000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
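
minikube renders the kubeadm manifest above from a versioned Go text/template filled with the kubeadm options struct printed at kubeadm.go:181. The fragment below is a toy reconstruction of that rendering for the InitConfiguration stanza only, with made-up field names; it is not minikube's real template.

    package main

    import (
        "os"
        "text/template"
    )

    // tmpl is a toy fragment of the InitConfiguration above, not
    // minikube's real template.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values copied from the kubeadm options logged above.
        _ = t.Execute(os.Stdout, map[string]any{
            "AdvertiseAddress": "10.0.2.15",
            "APIServerPort":    8443,
            "CRISocket":        "/var/run/cri-dockerd.sock",
            "NodeName":         "running-upgrade-810000",
        })
    }
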
	I1001 12:29:30.282349    4242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 12:29:30.285605    4242 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 12:29:30.285636    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 12:29:30.288769    4242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 12:29:30.293807    4242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 12:29:30.298824    4242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 12:29:30.303849    4242 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 12:29:30.305161    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:29:30.370363    4242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:29:30.375421    4242 certs.go:68] Setting up /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000 for IP: 10.0.2.15
	I1001 12:29:30.375430    4242 certs.go:194] generating shared ca certs ...
	I1001 12:29:30.375439    4242 certs.go:226] acquiring lock for ca certs: {Name:mk17296519b35110345119718efed98a68b82ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:29:30.375589    4242 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key
	I1001 12:29:30.375642    4242 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key
	I1001 12:29:30.375648    4242 certs.go:256] generating profile certs ...
	I1001 12:29:30.375704    4242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.key
	I1001 12:29:30.375720    4242 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key.3dfb0138
	I1001 12:29:30.375732    4242 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt.3dfb0138 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 12:29:30.453789    4242 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt.3dfb0138 ...
	I1001 12:29:30.453793    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt.3dfb0138: {Name:mk0222cefed4b1761d2d01d091b2009551b9d419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:29:30.454413    4242 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key.3dfb0138 ...
	I1001 12:29:30.454419    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key.3dfb0138: {Name:mkd4cbf07571d096eec7325ddda7e3384833a848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:29:30.454564    4242 certs.go:381] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt.3dfb0138 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt
	I1001 12:29:30.454713    4242 certs.go:385] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key.3dfb0138 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key
	I1001 12:29:30.454853    4242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/proxy-client.key
	I1001 12:29:30.454984    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem (1338 bytes)
	W1001 12:29:30.455014    4242 certs.go:480] ignoring /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595_empty.pem, impossibly tiny 0 bytes
	I1001 12:29:30.455020    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 12:29:30.455042    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem (1078 bytes)
	I1001 12:29:30.455061    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem (1123 bytes)
	I1001 12:29:30.455076    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem (1675 bytes)
	I1001 12:29:30.455116    4242 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:29:30.455439    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 12:29:30.462650    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 12:29:30.470365    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 12:29:30.477700    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 12:29:30.485231    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 12:29:30.492125    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 12:29:30.498973    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 12:29:30.505896    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 12:29:30.513250    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /usr/share/ca-certificates/15952.pem (1708 bytes)
	I1001 12:29:30.520114    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 12:29:30.526835    4242 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem --> /usr/share/ca-certificates/1595.pem (1338 bytes)
	I1001 12:29:30.533597    4242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 12:29:30.538677    4242 ssh_runner.go:195] Run: openssl version
	I1001 12:29:30.540531    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15952.pem && ln -fs /usr/share/ca-certificates/15952.pem /etc/ssl/certs/15952.pem"
	I1001 12:29:30.543957    4242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15952.pem
	I1001 12:29:30.545508    4242 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:02 /usr/share/ca-certificates/15952.pem
	I1001 12:29:30.545529    4242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15952.pem
	I1001 12:29:30.547616    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15952.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 12:29:30.550396    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 12:29:30.553594    4242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:29:30.555342    4242 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:29:30.555369    4242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:29:30.557207    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 12:29:30.560431    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1595.pem && ln -fs /usr/share/ca-certificates/1595.pem /etc/ssl/certs/1595.pem"
	I1001 12:29:30.563867    4242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1595.pem
	I1001 12:29:30.565321    4242 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:02 /usr/share/ca-certificates/1595.pem
	I1001 12:29:30.565342    4242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1595.pem
	I1001 12:29:30.567249    4242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1595.pem /etc/ssl/certs/51391683.0"
	I1001 12:29:30.569941    4242 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 12:29:30.571458    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 12:29:30.573286    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 12:29:30.575064    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 12:29:30.576749    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 12:29:30.578756    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 12:29:30.580530    4242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
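
The openssl calls above do two jobs: `x509 -hash` computes the subject hash that names the CA symlinks in /etc/ssl/certs (b5213941.0 for minikubeCA here), and `-checkend 86400` asks whether each control-plane certificate stays valid for at least one more day, with exit status 0 meaning it does. A small Go sketch of both probes, shelling out to the same openssl subcommands:

    package certs

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ValidFor reports whether the certificate at path remains valid for
    // at least the given number of seconds; openssl exits 0 only when
    // the cert will not expire within that window.
    func ValidFor(path string, seconds int) bool {
        return exec.Command("openssl", "x509", "-noout",
            "-checkend", fmt.Sprint(seconds), "-in", path).Run() == nil
    }

    // SubjectHash returns the hash OpenSSL uses to name CA symlinks in
    // /etc/ssl/certs, e.g. "b5213941" for the minikubeCA cert above.
    func SubjectHash(path string) (string, error) {
        out, err := exec.Command("openssl", "x509",
            "-hash", "-noout", "-in", path).Output()
        return strings.TrimSpace(string(out)), err
    }
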
	I1001 12:29:30.582403    4242 kubeadm.go:392] StartCluster: {Name:running-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50292 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:29:30.582477    4242 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:29:30.592502    4242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 12:29:30.595933    4242 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 12:29:30.595946    4242 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 12:29:30.595979    4242 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 12:29:30.599072    4242 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:29:30.599318    4242 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-810000" does not appear in /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:29:30.599371    4242 kubeconfig.go:62] /Users/jenkins/minikube-integration/19736-1073/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-810000" cluster setting kubeconfig missing "running-upgrade-810000" context setting]
	I1001 12:29:30.599513    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:29:30.601015    4242 kapi.go:59] client config for running-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:29:30.601337    4242 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 12:29:30.604278    4242 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-810000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1001 12:29:30.604285    4242 kubeadm.go:1160] stopping kube-system containers ...
	I1001 12:29:30.604339    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:29:30.615727    4242 docker.go:483] Stopping containers: [fc63081cff59 0670bfff123b 184a0fccf439 905688c6d37d fbe4eddea511 0d444f34fa74 85f3a613a166 fe9fb2bf08da 8f22eeb55450 f2476cd098af 1262a7e4c19e 768754dde00a]
	I1001 12:29:30.615809    4242 ssh_runner.go:195] Run: docker stop fc63081cff59 0670bfff123b 184a0fccf439 905688c6d37d fbe4eddea511 0d444f34fa74 85f3a613a166 fe9fb2bf08da 8f22eeb55450 f2476cd098af 1262a7e4c19e 768754dde00a
	I1001 12:29:30.627686    4242 ssh_runner.go:195] Run: sudo systemctl stop kubelet
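
Before regenerating certs and manifests, restartPrimaryControlPlane quiesces the node: every k8s_* container belonging to the kube-system namespace is stopped, then the kubelet itself. A sketch of that list-then-stop step, run directly against the guest's Docker for brevity rather than through the ssh_runner:

    package restart

    import (
        "os/exec"
        "strings"
    )

    // StopKubeSystem stops every kube-system pod container, following
    // the docker ps filter and docker stop calls in the log.
    func StopKubeSystem() error {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            return err
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return nil // nothing running
        }
        // Stop all of them in one docker stop invocation, as above.
        return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
    }
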
	I1001 12:29:30.735125    4242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:29:30.739877    4242 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  1 19:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct  1 19:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  1 19:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct  1 19:29 /etc/kubernetes/scheduler.conf
	
	I1001 12:29:30.739927    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I1001 12:29:30.743384    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:29:30.743420    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:29:30.746829    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I1001 12:29:30.750248    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:29:30.750279    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:29:30.753721    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I1001 12:29:30.757062    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:29:30.757098    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:29:30.760297    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I1001 12:29:30.763011    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:29:30.763040    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 12:29:30.765618    4242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:29:30.768518    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:29:30.797370    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:29:31.288190    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:29:31.462964    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:29:31.489033    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:29:31.515345    4242 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:29:31.515440    4242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:29:32.017528    4242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:29:32.517458    4242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:29:32.521657    4242 api_server.go:72] duration metric: took 1.006334208s to wait for apiserver process to appear ...
	I1001 12:29:32.521666    4242 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:29:32.521680    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:29:37.523703    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:29:37.523750    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:29:42.524095    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:29:42.524182    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:29:47.525269    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:29:47.525318    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:29:52.526512    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:29:52.526608    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:29:57.528148    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:29:57.528186    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:02.529816    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:02.529931    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:07.532397    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:07.532483    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:12.534976    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:12.535069    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:17.537679    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:17.537750    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:22.540342    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:22.540441    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:27.542851    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:27.542947    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:32.545655    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
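
This loop is where the test actually fails: every probe of https://10.0.2.15:8443/healthz times out after the 5-second client deadline, which is why the timestamps advance in 5-second steps and minikube eventually falls back to collecting diagnostics. A sketch of the polling pattern follows; TLS verification is disabled here purely for brevity, whereas the real client pins the cluster CA, and the retry pause is an assumed value.

    package health

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Wait polls the apiserver healthz endpoint until it answers 200 OK
    // or the overall deadline passes. Each probe gets the same 5s budget
    // seen in the log's "Client.Timeout exceeded" errors.
    func Wait(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Simplification: skip verification instead of pinning the CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for end := time.Now().Add(deadline); time.Now().Before(end); {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // brief pause between probes
        }
        return fmt.Errorf("apiserver never reported healthy at %s", url)
    }
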
	I1001 12:30:32.546228    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:30:32.583792    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:30:32.583979    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:30:32.605621    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:30:32.605733    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:30:32.620625    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:30:32.620718    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:30:32.635931    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:30:32.636022    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:30:32.646456    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:30:32.646529    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:30:32.657100    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:30:32.657184    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:30:32.671699    4242 logs.go:276] 0 containers: []
	W1001 12:30:32.671711    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:30:32.671785    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:30:32.685391    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:30:32.685410    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:30:32.685416    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:30:32.757656    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:30:32.757669    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:30:32.779197    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:30:32.779208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:30:32.794149    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:30:32.794160    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:30:32.805229    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:30:32.805239    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:30:32.822386    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:30:32.822398    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:30:32.826808    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:30:32.826818    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:30:32.840991    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:30:32.841002    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:30:32.852993    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:30:32.853004    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:30:32.871890    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:30:32.871900    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:30:32.885351    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:30:32.885362    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:30:32.897371    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:30:32.897386    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:30:32.909681    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:30:32.909710    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:30:32.946991    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:30:32.946997    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:30:32.958537    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:30:32.958549    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:30:32.984149    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:30:32.984157    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:30:32.997794    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:30:32.997807    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
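
Each failed health check triggers the same diagnostic sweep, repeated below at 12:30:40 and 12:30:48: resolve container IDs per component with docker ps name filters, then tail the last 400 lines of each, plus journalctl for the kubelet and Docker units. A compact sketch of the per-container part:

    package diag

    import (
        "fmt"
        "os/exec"
    )

    // Tail prints the last 400 log lines for each named container,
    // mirroring the repeated `docker logs --tail 400 <id>` calls above.
    func Tail(containers map[string]string) {
        for name, id := range containers {
            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("==> %s [%s] <==\n%s\n", name, id, out)
        }
    }

With IDs taken from this run it would be invoked as Tail(map[string]string{"kube-apiserver": "c470955dfaae", "etcd": "5b9e36bfadf5"}), and so on for the remaining components.
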
	I1001 12:30:35.511808    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:40.514510    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:40.515121    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:30:40.559041    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:30:40.559200    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:30:40.579060    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:30:40.579161    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:30:40.593068    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:30:40.593142    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:30:40.606607    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:30:40.606693    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:30:40.619201    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:30:40.619291    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:30:40.633178    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:30:40.633250    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:30:40.644488    4242 logs.go:276] 0 containers: []
	W1001 12:30:40.644498    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:30:40.644559    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:30:40.655060    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:30:40.655086    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:30:40.655091    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:30:40.679466    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:30:40.679473    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:30:40.699942    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:30:40.699954    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:30:40.713117    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:30:40.713127    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:30:40.727420    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:30:40.727430    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:30:40.743603    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:30:40.743615    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:30:40.757465    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:30:40.757475    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:30:40.774564    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:30:40.774573    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:30:40.787553    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:30:40.787563    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:30:40.799201    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:30:40.799214    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:30:40.811889    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:30:40.811898    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:30:40.816759    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:30:40.816767    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:30:40.852784    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:30:40.852795    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:30:40.867503    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:30:40.867512    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:30:40.879511    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:30:40.879521    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:30:40.890987    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:30:40.890997    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:30:40.925940    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:30:40.925948    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:30:43.441947    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:48.443822    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:48.444484    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:30:48.483403    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:30:48.483588    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:30:48.504148    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:30:48.504258    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:30:48.519611    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:30:48.519702    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:30:48.532587    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:30:48.532678    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:30:48.543541    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:30:48.543632    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:30:48.554409    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:30:48.554500    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:30:48.564849    4242 logs.go:276] 0 containers: []
	W1001 12:30:48.564871    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:30:48.564939    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:30:48.575301    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:30:48.575319    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:30:48.575324    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:30:48.611371    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:30:48.611380    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:30:48.628340    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:30:48.628352    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:30:48.645733    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:30:48.645745    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:30:48.669757    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:30:48.669765    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:30:48.673839    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:30:48.673848    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:30:48.689879    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:30:48.689888    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:30:48.705274    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:30:48.705286    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:30:48.721458    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:30:48.721469    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:30:48.740338    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:30:48.740348    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:30:48.751678    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:30:48.751691    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:30:48.762914    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:30:48.762927    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:30:48.773815    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:30:48.773835    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:30:48.785523    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:30:48.785534    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:30:48.820816    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:30:48.820827    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:30:48.834201    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:30:48.834213    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:30:48.846097    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:30:48.846110    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
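	(Each failed probe triggers the same two-phase diagnostics pass visible above: first one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component to discover container IDs, then one `docker logs --tail 400 <id>` per discovered container. A hedged Go sketch of that loop using os/exec; the component list and helper name are illustrative, and the real commands run inside the guest via ssh_runner.go rather than locally:)

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the k8s_* name filters seen in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs lists container IDs whose name matches k8s_<name>.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// mirrors the W-level line: No container was found matching "kindnet"
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as in the log's "docker logs --tail 400 <id>".
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
```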
	I1001 12:30:51.370454    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:30:56.371637    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:30:56.371868    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:30:56.389892    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:30:56.389998    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:30:56.406264    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:30:56.406350    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:30:56.417235    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:30:56.417319    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:30:56.428463    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:30:56.428549    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:30:56.439028    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:30:56.439099    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:30:56.449004    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:30:56.449089    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:30:56.460049    4242 logs.go:276] 0 containers: []
	W1001 12:30:56.460061    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:30:56.460123    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:30:56.471008    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:30:56.471026    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:30:56.471031    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:30:56.483128    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:30:56.483144    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:30:56.487846    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:30:56.487855    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:30:56.501505    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:30:56.501518    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:30:56.517531    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:30:56.517544    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:30:56.541031    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:30:56.541039    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:30:56.552254    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:30:56.552264    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:30:56.588343    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:30:56.588354    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:30:56.600372    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:30:56.600388    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:30:56.613289    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:30:56.613300    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:30:56.629787    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:30:56.629797    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:30:56.640979    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:30:56.640991    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:30:56.658937    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:30:56.658948    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:30:56.670283    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:30:56.670293    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:30:56.688067    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:30:56.688091    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:30:56.724438    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:30:56.724448    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:30:56.743088    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:30:56.743105    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:30:59.258296    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:04.261004    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:04.261404    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:04.295893    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:04.296048    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:04.320216    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:04.320353    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:04.337906    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:04.337989    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:04.349640    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:04.349720    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:04.359729    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:04.359806    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:04.371403    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:04.371468    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:04.381916    4242 logs.go:276] 0 containers: []
	W1001 12:31:04.381932    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:04.381994    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:04.392582    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:04.392601    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:04.392607    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:04.407179    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:04.407193    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:04.418615    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:04.418629    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:04.441738    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:04.441748    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:04.454813    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:04.454824    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:04.478877    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:04.478883    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:04.516847    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:04.516857    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:04.531185    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:04.531196    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:04.547514    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:04.547527    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:04.560221    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:04.560232    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:04.573291    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:04.573303    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:04.577695    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:04.577701    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:04.597668    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:04.597678    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:04.613558    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:04.613574    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:04.627064    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:04.627079    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:04.663492    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:04.663501    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:04.677418    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:04.677427    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:07.189538    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:12.191967    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:12.192474    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:12.232637    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:12.232797    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:12.254066    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:12.254182    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:12.268823    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:12.268909    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:12.281360    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:12.281442    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:12.291962    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:12.292040    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:12.303176    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:12.303251    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:12.313837    4242 logs.go:276] 0 containers: []
	W1001 12:31:12.313848    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:12.313915    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:12.324161    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:12.324182    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:12.324188    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:12.338091    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:12.338105    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:12.349687    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:12.349696    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:12.361102    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:12.361112    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:12.397731    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:12.397740    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:12.416363    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:12.416374    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:12.428273    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:12.428283    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:12.440402    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:12.440413    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:12.475073    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:12.475086    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:12.488853    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:12.488866    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:12.503114    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:12.503124    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:12.520739    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:12.520749    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:12.533241    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:12.533252    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:12.557946    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:12.557953    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:12.562003    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:12.562009    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:12.575708    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:12.575718    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:12.592093    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:12.592105    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
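	(The "container status" step just above is worth unpacking: the shell line `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers crictl when it is on PATH and falls back to `docker ps -a` when the crictl invocation is missing or fails. The same fallback expressed as a Go sketch, with sudo and the ssh indirection elided for brevity:)

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mimics the fallback the log runs via /bin/bash -c:
// try `crictl ps -a` first, and fall back to `docker ps -a` when crictl
// is absent or errors out. Sketch only; the real command also uses sudo.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
```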
	I1001 12:31:15.105434    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:20.107172    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:20.107672    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:20.142508    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:20.142667    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:20.164687    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:20.164826    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:20.180686    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:20.180791    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:20.196112    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:20.196205    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:20.207103    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:20.207182    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:20.217572    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:20.217664    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:20.227700    4242 logs.go:276] 0 containers: []
	W1001 12:31:20.227710    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:20.227774    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:20.237925    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:20.237947    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:20.237953    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:20.256046    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:20.256055    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:20.267456    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:20.267467    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:20.303255    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:20.303263    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:20.337671    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:20.337683    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:20.352114    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:20.352129    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:20.375682    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:20.375689    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:20.389328    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:20.389338    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:20.406973    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:20.406985    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:20.418534    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:20.418545    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:20.430254    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:20.430266    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:20.446759    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:20.446771    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:20.458896    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:20.458907    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:20.471342    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:20.471352    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:20.482908    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:20.482919    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:20.487604    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:20.487614    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:20.500986    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:20.500995    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:23.014420    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:28.017042    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:28.017261    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:28.029660    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:28.029758    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:28.046408    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:28.046498    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:28.056725    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:28.056811    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:28.067606    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:28.067684    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:28.078407    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:28.078491    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:28.089122    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:28.089194    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:28.099508    4242 logs.go:276] 0 containers: []
	W1001 12:31:28.099522    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:28.099598    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:28.110091    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:28.110111    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:28.110117    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:28.121746    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:28.121757    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:28.147534    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:28.147544    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:28.165301    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:28.165312    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:28.184657    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:28.184666    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:28.201149    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:28.201162    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:28.212789    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:28.212801    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:28.250546    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:28.250557    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:28.262018    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:28.262032    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:28.274445    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:28.274461    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:28.289469    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:28.289485    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:28.293874    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:28.293881    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:28.309103    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:28.309114    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:28.325575    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:28.325588    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:28.343551    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:28.343565    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:28.358043    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:28.358054    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:28.370245    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:28.370254    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:30.910174    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:35.911201    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:35.911731    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:35.953253    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:35.953429    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:35.974878    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:35.975005    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:35.989641    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:35.989736    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:36.005675    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:36.005766    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:36.030790    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:36.030872    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:36.046158    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:36.046247    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:36.058483    4242 logs.go:276] 0 containers: []
	W1001 12:31:36.058495    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:36.058572    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:36.069184    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:36.069203    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:36.069208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:36.083245    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:36.083256    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:36.097980    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:36.097992    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:36.110270    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:36.110280    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:36.127164    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:36.127182    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:36.166175    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:36.166186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:36.179083    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:36.179093    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:36.190289    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:36.190301    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:36.205904    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:36.205932    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:36.229477    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:36.229485    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:36.241472    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:36.241483    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:36.253075    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:36.253083    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:36.272644    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:36.272653    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:36.277123    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:36.277132    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:36.311174    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:36.311188    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:36.322791    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:36.322803    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:36.335145    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:36.335155    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:38.874440    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:43.876795    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:43.876938    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:43.888482    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:43.888564    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:43.899727    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:43.899815    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:43.910724    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:43.910804    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:43.921594    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:43.921689    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:43.932830    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:43.932918    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:43.951071    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:43.951154    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:43.962012    4242 logs.go:276] 0 containers: []
	W1001 12:31:43.962023    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:43.962096    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:43.972876    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:43.972897    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:43.972905    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:43.987549    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:43.987562    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:44.025260    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:44.025270    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:44.061151    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:44.061166    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:44.075842    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:44.075860    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:44.087169    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:44.087183    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:44.104969    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:44.104982    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:44.123820    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:44.123838    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:44.144107    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:44.144128    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:44.159873    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:44.159890    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:44.174448    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:44.174461    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:44.186557    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:44.186569    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:44.198095    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:44.198107    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:44.202331    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:44.202340    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:44.213882    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:44.213897    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:44.227172    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:44.227184    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:44.238831    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:44.238843    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:46.766052    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:51.768181    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:51.768324    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:51.780151    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:51.780249    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:51.791070    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:51.791157    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:51.801183    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:51.801267    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:51.811989    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:51.812071    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:51.822258    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:51.822347    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:51.832444    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:51.832525    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:51.843218    4242 logs.go:276] 0 containers: []
	W1001 12:31:51.843233    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:51.843296    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:51.858073    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:51.858094    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:51.858099    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:51.895277    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:51.895288    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:51.934073    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:51.934086    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:51.945174    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:51.945188    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:51.962861    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:51.962871    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:51.979394    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:51.979405    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:51.991245    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:51.991256    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:51.995694    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:31:51.995703    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:31:52.015274    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:52.015289    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:52.030197    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:52.030208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:52.047390    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:52.047401    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:52.059336    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:52.059349    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:52.084018    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:52.084027    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:52.098472    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:31:52.098486    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:31:52.112648    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:52.112659    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:52.125335    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:52.125346    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:31:52.136628    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:52.136640    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:54.650151    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:31:59.652596    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:31:59.652727    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:31:59.667725    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:31:59.667809    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:31:59.679373    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:31:59.679463    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:31:59.690494    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:31:59.690575    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:31:59.701656    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:31:59.701743    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:31:59.712821    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:31:59.712902    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:31:59.726810    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:31:59.726889    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:31:59.737280    4242 logs.go:276] 0 containers: []
	W1001 12:31:59.737291    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:31:59.737360    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:31:59.748007    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:31:59.748026    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:31:59.748031    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:31:59.761638    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:31:59.761651    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:31:59.773215    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:31:59.773228    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:31:59.785265    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:31:59.785274    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:31:59.804157    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:31:59.804173    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:31:59.816494    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:31:59.816506    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:31:59.833421    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:31:59.833433    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:31:59.857589    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:31:59.857600    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:31:59.869834    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:31:59.869850    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:31:59.907543    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:31:59.907556    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:31:59.946067    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:31:59.946082    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:31:59.962504    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:31:59.962522    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:31:59.977510    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:31:59.977526    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:31:59.982928    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:31:59.982940    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:31:59.998511    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:31:59.998527    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:00.010742    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:00.010757    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:00.030986    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:00.031000    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:02.548132    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:07.550381    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:07.550766    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:07.582295    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:07.582449    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:07.600200    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:07.600312    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:07.613207    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:07.613305    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:07.626486    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:07.626576    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:07.636869    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:07.636949    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:07.647202    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:07.647286    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:07.657246    4242 logs.go:276] 0 containers: []
	W1001 12:32:07.657257    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:07.657334    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:07.667867    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:07.667887    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:07.667892    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:07.684875    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:07.684884    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:07.697691    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:07.697702    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:07.709257    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:07.709272    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:07.733684    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:07.733694    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:07.769933    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:07.769941    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:07.789469    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:07.789481    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:07.800719    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:07.800732    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:07.812723    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:07.812734    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:07.817078    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:07.817087    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:07.830815    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:07.830827    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:07.842812    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:07.842825    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:07.856928    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:07.856943    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:07.868058    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:07.868070    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:07.904509    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:07.904522    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:07.919202    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:07.919216    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:07.942260    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:07.942273    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:10.463865    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:15.466024    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
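
Each probe above pairs api_server.go:253 ("Checking apiserver healthz") with a 5-second client timeout; when no response arrives in time, the Get fails with "context deadline exceeded" and the loop falls back to collecting diagnostics. A minimal Go sketch of one such probe follows, with the assumption (for this sketch only) that certificate verification is skipped; the real check trusts the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz issues one GET against the apiserver /healthz endpoint,
    // giving up after the same 5-second budget seen in the log above.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for this sketch only: skip cert verification.
            // minikube itself verifies against the cluster CA.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded (Client.Timeout ...)"
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
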
	I1001 12:32:15.466206    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:15.480053    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:15.480138    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:15.493242    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:15.493330    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:15.503838    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:15.503919    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:15.514517    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:15.514605    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:15.525048    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:15.525137    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:15.535781    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:15.535864    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:15.546157    4242 logs.go:276] 0 containers: []
	W1001 12:32:15.546169    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:15.546241    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:15.556897    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:15.556917    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:15.556926    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:15.567893    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:15.567906    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:15.579540    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:15.579549    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:15.590995    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:15.591007    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:15.614330    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:15.614341    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:15.618964    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:15.618971    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:15.633654    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:15.633669    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:15.646543    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:15.646555    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:15.687975    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:15.687988    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:15.704301    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:15.704315    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:15.718792    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:15.718807    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:15.735179    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:15.735192    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:15.746936    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:15.746948    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:15.764187    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:15.764197    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:15.775355    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:15.775370    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:15.787054    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:15.787068    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:15.825245    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:15.825254    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
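
The discovery phase of each cycle shells out to docker once per component, filtering on the k8s_ name prefix that kubelet (via cri-dockerd) gives its containers and printing only the IDs, which logs.go then counts. A sketch of the same query, assuming a local docker CLI rather than minikube's SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose name
    // matches k8s_<component>, returning their short IDs: the same query
    // issued through ssh_runner in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }
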
	I1001 12:32:18.356378    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:23.358780    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:23.358990    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:23.370733    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:23.370825    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:23.386121    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:23.386211    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:23.400330    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:23.400414    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:23.411125    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:23.411210    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:23.421622    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:23.421703    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:23.432530    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:23.432614    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:23.444500    4242 logs.go:276] 0 containers: []
	W1001 12:32:23.444511    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:23.444581    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:23.455891    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:23.455910    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:23.455916    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:23.468029    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:23.468040    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:23.504062    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:23.504072    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:23.521191    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:23.521202    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:23.533124    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:23.533136    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:23.544275    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:23.544287    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:23.563759    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:23.563774    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:23.577667    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:23.577683    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:23.590922    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:23.590934    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:23.615872    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:23.615880    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:23.623378    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:23.623386    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:23.635624    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:23.635635    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:23.650493    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:23.650503    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:23.662689    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:23.662701    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:23.682797    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:23.682807    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:23.694958    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:23.694968    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:23.734989    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:23.735006    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
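
The "container status" step is the one runtime-agnostic command in the bundle: `which crictl || echo crictl` substitutes a bare "crictl" when the binary is absent so the first branch fails cleanly, and the outer `|| sudo docker ps -a` then takes over. A sketch that mirrors how the runner invokes it, assuming local execution instead of SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same fallback chain as the "container status" command above:
        // prefer crictl when installed, otherwise list via docker.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("error:", err)
        }
    }
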
	I1001 12:32:26.252176    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:31.253299    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:31.253424    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:31.264656    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:31.264738    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:31.283156    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:31.283265    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:31.294322    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:31.294407    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:31.305591    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:31.305673    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:31.316532    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:31.316614    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:31.328155    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:31.328239    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:31.338454    4242 logs.go:276] 0 containers: []
	W1001 12:32:31.338467    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:31.338540    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:31.349325    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:31.349348    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:31.349354    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:31.354333    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:31.354341    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:31.374490    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:31.374501    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:31.388786    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:31.388797    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:31.407130    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:31.407146    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:31.418744    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:31.418759    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:31.433722    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:31.433735    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:31.450893    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:31.450904    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:31.462595    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:31.462607    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:31.481359    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:31.481371    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:31.501382    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:31.501398    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:31.513713    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:31.513725    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:31.555197    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:31.555216    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:31.594741    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:31.594754    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:31.615723    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:31.615737    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:31.636758    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:31.636771    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:31.648830    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:31.648842    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:34.176492    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:39.179208    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:39.179686    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:39.212320    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:39.212479    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:39.232698    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:39.232817    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:39.247631    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:39.247726    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:39.260446    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:39.260533    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:39.271192    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:39.271272    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:39.282373    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:39.282448    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:39.292661    4242 logs.go:276] 0 containers: []
	W1001 12:32:39.292672    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:39.292737    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:39.303302    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:39.303320    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:39.303326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:39.319396    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:39.319406    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:39.333094    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:39.333105    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:39.356762    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:39.356773    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:39.391639    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:39.391649    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:39.395753    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:39.395761    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:39.414726    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:39.414740    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:39.429911    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:39.429922    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:39.452983    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:39.452997    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:39.464741    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:39.464752    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:39.477527    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:39.477543    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:39.515321    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:39.515335    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:39.527457    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:39.527468    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:39.543835    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:39.543846    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:39.565106    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:39.565122    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:39.579121    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:39.579134    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:39.591122    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:39.591135    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:42.103343    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:47.105555    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:47.105721    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:47.127079    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:47.127148    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:47.138384    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:47.138444    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:47.149937    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:47.150010    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:47.161606    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:47.161665    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:47.173418    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:47.173482    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:47.185114    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:47.185176    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:47.196305    4242 logs.go:276] 0 containers: []
	W1001 12:32:47.196315    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:47.196382    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:47.207805    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:47.207820    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:47.207826    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:47.253638    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:47.253648    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:47.258352    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:47.258361    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:47.273264    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:47.273274    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:47.287130    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:47.287146    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:47.299955    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:47.299963    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:47.312504    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:47.312517    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:47.348858    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:47.348875    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:47.370349    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:47.370365    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:47.385655    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:47.385668    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:47.397695    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:47.397706    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:47.414533    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:47.414551    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:47.426837    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:47.426852    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:47.446744    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:47.446760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:47.461930    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:47.461947    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:47.474274    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:47.474286    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:47.486064    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:47.486076    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:50.011845    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:55.014265    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:55.014814    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:55.055174    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:55.055359    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:55.076737    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:55.076879    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:55.091627    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:55.091720    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:55.104520    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:55.104609    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:55.119528    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:55.119615    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:55.130516    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:55.130595    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:55.140077    4242 logs.go:276] 0 containers: []
	W1001 12:32:55.140088    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:55.140158    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:55.151457    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:55.151479    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:55.151485    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:55.166159    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:55.166170    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:55.178458    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:55.178469    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:55.196519    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:55.196536    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:55.209070    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:55.209082    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:55.229598    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:55.229613    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:55.241141    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:55.241155    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:55.252922    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:55.252935    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:55.264195    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:55.264207    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:55.286302    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:55.286309    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:55.322106    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:55.322122    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:55.357852    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:55.357866    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:55.362177    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:55.362186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:55.376070    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:55.376080    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:55.390059    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:55.390073    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:55.408397    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:55.408407    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:55.446229    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:55.446254    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:57.970755    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:02.973400    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:02.973926    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:03.013616    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:03.013795    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:03.034980    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:03.035102    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:03.050725    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:03.050829    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:03.062803    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:03.062902    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:03.073914    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:03.074004    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:03.088727    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:03.088837    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:03.099144    4242 logs.go:276] 0 containers: []
	W1001 12:33:03.099155    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:03.099232    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:03.110031    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:03.110047    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:03.110053    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:03.124039    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:03.124049    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:03.135426    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:03.135438    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:03.146693    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:03.146702    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:03.165130    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:03.165142    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:03.182340    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:03.182352    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:03.200164    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:03.200176    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:03.212476    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:03.212487    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:03.224104    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:03.224113    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:03.228929    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:03.228936    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:03.262739    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:03.262751    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:03.274450    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:03.274459    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:03.285955    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:03.285968    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:03.298667    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:03.298678    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:03.334293    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:03.334305    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:03.354175    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:03.354190    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:03.375136    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:03.375147    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:05.900477    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:10.902757    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:10.903376    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:10.944431    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:10.944599    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:10.968876    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:10.969001    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:10.983934    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:10.984029    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:10.996703    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:10.996795    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:11.007901    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:11.007983    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:11.018722    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:11.018806    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:11.029267    4242 logs.go:276] 0 containers: []
	W1001 12:33:11.029283    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:11.029353    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:11.040343    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:11.040366    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:11.040372    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:11.059872    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:11.059888    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:11.074664    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:11.074675    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:11.086562    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:11.086571    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:11.099094    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:11.099103    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:11.118672    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:11.118688    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:11.140684    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:11.140696    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:11.145454    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:11.145461    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:11.180980    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:11.180993    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:11.195628    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:11.195638    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:11.207205    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:11.207215    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:11.219056    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:11.219068    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:11.241595    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:11.241605    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:11.276474    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:11.276489    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:11.290983    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:11.290998    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:11.302756    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:11.302773    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:11.321424    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:11.321435    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:13.835724    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:18.838069    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:18.838700    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:18.863553    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:18.863666    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:18.878492    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:18.878577    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:18.890241    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:18.890324    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:18.900722    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:18.900808    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:18.911118    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:18.911195    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:18.921909    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:18.921980    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:18.932116    4242 logs.go:276] 0 containers: []
	W1001 12:33:18.932135    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:18.932202    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:18.943254    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:18.943273    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:18.943279    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:18.954552    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:18.954562    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:18.965943    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:18.965955    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:19.001839    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:19.001850    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:19.015879    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:19.015893    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:19.027224    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:19.027237    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:19.048747    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:19.048753    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:19.053426    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:19.053434    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:19.069603    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:19.069615    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:19.081493    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:19.081503    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:19.098580    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:19.098592    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:19.111887    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:19.111899    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:19.127201    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:19.127214    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:19.153836    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:19.153847    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:19.167962    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:19.167972    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:19.182249    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:19.182257    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:19.193765    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:19.193779    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:21.733377    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:26.735591    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:26.735718    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:26.746973    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:26.747060    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:26.758092    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:26.758171    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:26.768214    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:26.768293    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:26.779200    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:26.779284    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:26.791515    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:26.791592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:26.801958    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:26.802039    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:26.812012    4242 logs.go:276] 0 containers: []
	W1001 12:33:26.812024    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:26.812090    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:26.822445    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:26.822463    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:26.822468    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:26.834711    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:26.834722    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:26.848489    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:26.848500    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:26.865351    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:26.865363    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:26.877440    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:26.877456    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:26.888618    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:26.888638    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:26.903146    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:26.903157    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:26.919172    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:26.919183    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:26.932425    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:26.932436    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:26.952005    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:26.952018    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:26.963743    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:26.963758    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:26.982037    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:26.982053    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:26.986407    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:26.986414    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:27.020455    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:27.020471    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:27.035261    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:27.035278    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:27.046911    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:27.046925    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:27.069761    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:27.069768    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:29.609064    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:34.610834    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:34.610995    4242 kubeadm.go:597] duration metric: took 4m4.020074875s to restartPrimaryControlPlane
	W1001 12:33:34.611149    4242 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
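
None of the probes above ever succeeds, so after roughly four minutes (4m4.02s per the duration metric) the restart path gives up and pivots to a full kubeadm reset. A sketch of that outer loop, assuming the 5-second probe sketched earlier and an illustrative overall budget:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServer retries the healthz probe until it succeeds or the
    // overall budget expires, mirroring the ~4-minute restart window seen
    // in the log; probe is assumed to be the 5-second check from earlier.
    func waitForAPIServer(probe func() error, budget time.Duration) bool {
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            if err := probe(); err == nil {
                return true
            }
            time.Sleep(3 * time.Second) // brief pause between attempts
        }
        return false
    }

    func main() {
        probe := func() error { return fmt.Errorf("context deadline exceeded") }
        // Budget shortened here for demonstration; the log shows ~4 minutes.
        if !waitForAPIServer(probe, 10*time.Second) {
            fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
        }
    }
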
	I1001 12:33:34.611218    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 12:33:35.668369    4242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0571585s)
	I1001 12:33:35.668464    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 12:33:35.673297    4242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:33:35.676055    4242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:33:35.678694    4242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:33:35.678700    4242 kubeadm.go:157] found existing configuration files:
	
	I1001 12:33:35.678731    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I1001 12:33:35.681608    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:33:35.681634    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:33:35.684960    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I1001 12:33:35.687755    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:33:35.687787    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:33:35.690499    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I1001 12:33:35.693626    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:33:35.693653    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:33:35.696188    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I1001 12:33:35.698696    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:33:35.698721    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
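Note: the four grep/rm pairs above apply a single stale-config rule: keep an existing kubeconfig only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. The same logic, condensed (a sketch):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:50292" /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done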
	I1001 12:33:35.701702    4242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 12:33:35.718827    4242 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 12:33:35.718876    4242 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 12:33:35.769314    4242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 12:33:35.769367    4242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 12:33:35.769420    4242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 12:33:35.819657    4242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 12:33:35.823872    4242 out.go:235]   - Generating certificates and keys ...
	I1001 12:33:35.823947    4242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 12:33:35.824002    4242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 12:33:35.824057    4242 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 12:33:35.824084    4242 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 12:33:35.824119    4242 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 12:33:35.824146    4242 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 12:33:35.824213    4242 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 12:33:35.824348    4242 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 12:33:35.824392    4242 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 12:33:35.824451    4242 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 12:33:35.824479    4242 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 12:33:35.824517    4242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 12:33:35.899524    4242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 12:33:36.083295    4242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 12:33:36.116235    4242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 12:33:36.150768    4242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 12:33:36.180233    4242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 12:33:36.180623    4242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 12:33:36.180671    4242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 12:33:36.251291    4242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 12:33:36.259257    4242 out.go:235]   - Booting up control plane ...
	I1001 12:33:36.259390    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 12:33:36.259440    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 12:33:36.259505    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 12:33:36.259551    4242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 12:33:36.259649    4242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 12:33:40.258822    4242 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002460 seconds
	I1001 12:33:40.258883    4242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 12:33:40.263743    4242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 12:33:40.773731    4242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 12:33:40.773963    4242 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-810000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 12:33:41.279592    4242 kubeadm.go:310] [bootstrap-token] Using token: x0io92.vmnxuthgf6zifeig
	I1001 12:33:41.285523    4242 out.go:235]   - Configuring RBAC rules ...
	I1001 12:33:41.285580    4242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 12:33:41.285621    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 12:33:41.287692    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 12:33:41.290031    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 12:33:41.290750    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 12:33:41.291571    4242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 12:33:41.294523    4242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 12:33:41.461839    4242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 12:33:41.685194    4242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 12:33:41.685621    4242 kubeadm.go:310] 
	I1001 12:33:41.685650    4242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 12:33:41.685654    4242 kubeadm.go:310] 
	I1001 12:33:41.685697    4242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 12:33:41.685726    4242 kubeadm.go:310] 
	I1001 12:33:41.685740    4242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 12:33:41.685780    4242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 12:33:41.685806    4242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 12:33:41.685809    4242 kubeadm.go:310] 
	I1001 12:33:41.685834    4242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 12:33:41.685836    4242 kubeadm.go:310] 
	I1001 12:33:41.685857    4242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 12:33:41.685861    4242 kubeadm.go:310] 
	I1001 12:33:41.685888    4242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 12:33:41.685929    4242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 12:33:41.685973    4242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 12:33:41.685978    4242 kubeadm.go:310] 
	I1001 12:33:41.686023    4242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 12:33:41.686068    4242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 12:33:41.686072    4242 kubeadm.go:310] 
	I1001 12:33:41.686113    4242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0io92.vmnxuthgf6zifeig \
	I1001 12:33:41.686163    4242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e \
	I1001 12:33:41.686174    4242 kubeadm.go:310] 	--control-plane 
	I1001 12:33:41.686178    4242 kubeadm.go:310] 
	I1001 12:33:41.686228    4242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 12:33:41.686232    4242 kubeadm.go:310] 
	I1001 12:33:41.686275    4242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0io92.vmnxuthgf6zifeig \
	I1001 12:33:41.686342    4242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e 
	I1001 12:33:41.686397    4242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
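Note: the [WARNING Service-Kubelet] above is advisory only; kubeadm has already started the kubelet for this boot, and the warning's own suggestion makes it start on subsequent boots as well:

    sudo systemctl enable kubelet.service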
	I1001 12:33:41.686413    4242 cni.go:84] Creating CNI manager for ""
	I1001 12:33:41.686421    4242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:33:41.689192    4242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 12:33:41.693269    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 12:33:41.696274    4242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
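Note: the 496-byte conflist payload itself is not reproduced in the log. A representative bridge CNI configuration of the kind minikube installs here (illustrative values, not the exact bytes that were copied):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF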
	I1001 12:33:41.702249    4242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 12:33:41.702314    4242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 12:33:41.702349    4242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-810000 minikube.k8s.io/updated_at=2024_10_01T12_33_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=running-upgrade-810000 minikube.k8s.io/primary=true
	I1001 12:33:41.744489    4242 kubeadm.go:1113] duration metric: took 42.229542ms to wait for elevateKubeSystemPrivileges
	I1001 12:33:41.744494    4242 ops.go:34] apiserver oom_adj: -16
	I1001 12:33:41.744603    4242 kubeadm.go:394] duration metric: took 4m11.167389209s to StartCluster
	I1001 12:33:41.744614    4242 settings.go:142] acquiring lock: {Name:mk456a8b96b1746a679d3a85129b9d4d9b38bdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:33:41.744701    4242 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:33:41.745077    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:33:41.745454    4242 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:33:41.745458    4242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 12:33:41.745492    4242 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-810000"
	I1001 12:33:41.745500    4242 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-810000"
	W1001 12:33:41.745503    4242 addons.go:243] addon storage-provisioner should already be in state true
	I1001 12:33:41.745516    4242 host.go:66] Checking if "running-upgrade-810000" exists ...
	I1001 12:33:41.745516    4242 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-810000"
	I1001 12:33:41.745559    4242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-810000"
	I1001 12:33:41.745586    4242 config.go:182] Loaded profile config "running-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:33:41.746409    4242 kapi.go:59] client config for running-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:33:41.746532    4242 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-810000"
	W1001 12:33:41.746537    4242 addons.go:243] addon default-storageclass should already be in state true
	I1001 12:33:41.746543    4242 host.go:66] Checking if "running-upgrade-810000" exists ...
	I1001 12:33:41.748265    4242 out.go:177] * Verifying Kubernetes components...
	I1001 12:33:41.748571    4242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 12:33:41.752549    4242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 12:33:41.752560    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:33:41.756149    4242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:33:41.760190    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:33:41.763183    4242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:33:41.763188    4242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 12:33:41.763194    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
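Note: both addon manifests travel over the SSH client described in the sshutil lines above; an equivalent manual session into the guest uses the same forwarded port and key (values taken directly from those lines):

    ssh -p 50260 \
      -i /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa \
      docker@localhost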
	I1001 12:33:41.837300    4242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:33:41.842722    4242 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:33:41.842772    4242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:33:41.847066    4242 api_server.go:72] duration metric: took 101.604166ms to wait for apiserver process to appear ...
	I1001 12:33:41.847075    4242 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:33:41.847082    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:41.851921    4242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:33:41.893566    4242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 12:33:42.200609    4242 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 12:33:42.200622    4242 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 12:33:46.845705    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:46.845812    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:51.842480    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:51.842500    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:56.839956    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:56.840010    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:01.838838    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:01.838887    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:06.838296    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:06.838345    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:11.838470    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:11.838518    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 12:34:12.185695    4242 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 12:34:12.190209    4242 out.go:177] * Enabled addons: storage-provisioner
	I1001 12:34:12.199076    4242 addons.go:510] duration metric: took 30.469652042s for enable addons: enabled=[storage-provisioner]
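Note: only storage-provisioner was enabled; default-storageclass failed with the i/o timeout shown above. A quick check that the provisioner pod actually landed (a sketch, run inside the guest; the pod is conventionally named storage-provisioner in kube-system):

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pod storage-provisioner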
	I1001 12:34:16.839266    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:16.839309    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:21.840655    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:21.840697    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:26.842357    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:26.842382    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:31.843849    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:31.843899    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:36.844994    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:36.845016    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:41.846904    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:41.847050    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:41.860497    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:41.860592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:41.873617    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:41.873698    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:41.885101    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:41.885185    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:41.910018    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:41.910110    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:41.937331    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:41.937416    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:41.948109    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:41.948184    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:41.958609    4242 logs.go:276] 0 containers: []
	W1001 12:34:41.958622    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:41.958684    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:41.969409    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:41.969425    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:41.969432    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:41.994363    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:41.994379    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:42.006466    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:42.006476    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:42.030780    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:42.030787    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:42.066417    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:42.066425    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:42.081402    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:42.081413    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:42.093626    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:42.093639    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:42.104878    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:42.104889    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:42.116822    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:42.116831    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:42.132713    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:42.132725    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:42.144793    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:42.144805    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:42.149686    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:42.149693    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:42.225505    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:42.225521    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:44.739905    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:49.741860    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:49.742081    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:49.755488    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:49.755582    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:49.767237    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:49.767327    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:49.777767    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:49.777856    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:49.788537    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:49.788627    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:49.803044    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:49.803132    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:49.814556    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:49.814642    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:49.824982    4242 logs.go:276] 0 containers: []
	W1001 12:34:49.824998    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:49.825074    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:49.835198    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:49.835216    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:49.835223    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:49.846635    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:49.846646    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:49.882004    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:49.882019    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:49.896360    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:49.896373    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:49.911970    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:49.911982    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:49.930642    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:49.930652    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:49.953504    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:49.953511    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:49.965421    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:49.965433    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:49.981382    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:49.981394    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:50.016209    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:50.016215    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:50.021011    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:50.021021    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:50.035150    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:50.035161    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:50.046330    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:50.046340    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:52.558070    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:57.560225    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:57.560703    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:57.595335    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:57.595507    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:57.615255    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:57.615373    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:57.631256    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:57.631333    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:57.643465    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:57.643556    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:57.658616    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:57.658694    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:57.669272    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:57.669362    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:57.679800    4242 logs.go:276] 0 containers: []
	W1001 12:34:57.679812    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:57.679889    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:57.689987    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:57.690002    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:57.690007    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:57.727562    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:57.727570    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:57.764445    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:57.764456    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:57.778603    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:57.778614    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:57.804860    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:57.804872    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:57.817413    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:57.817429    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:57.834868    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:57.834881    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:57.847138    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:57.847155    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:57.851507    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:57.851515    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:57.862926    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:57.862961    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:57.878949    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:57.878965    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:57.890316    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:57.890326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:57.901460    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:57.901470    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:00.425867    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:05.427927    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:05.428290    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:05.454372    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:05.454525    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:05.472862    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:05.472958    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:05.486488    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:05.486585    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:05.497661    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:05.497743    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:05.508493    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:05.508578    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:05.518869    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:05.518946    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:05.530629    4242 logs.go:276] 0 containers: []
	W1001 12:35:05.530641    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:05.530706    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:05.540645    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:05.540661    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:05.540666    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:05.564006    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:05.564014    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:05.598379    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:05.598387    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:05.612324    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:05.612334    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:05.626291    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:05.626307    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:05.637913    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:05.637929    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:05.649476    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:05.649492    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:05.666914    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:05.666924    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:05.671778    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:05.671784    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:05.710136    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:05.710149    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:05.721612    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:05.721622    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:05.736811    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:05.736824    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:05.748672    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:05.748688    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:08.264043    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:13.265864    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:13.265986    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:13.280765    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:13.280852    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:13.291199    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:13.291274    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:13.306105    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:13.306187    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:13.316238    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:13.316318    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:13.327003    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:13.327087    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:13.337573    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:13.337659    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:13.347522    4242 logs.go:276] 0 containers: []
	W1001 12:35:13.347539    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:13.347600    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:13.357655    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:13.357671    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:13.357676    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:13.374205    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:13.374215    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:13.394413    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:13.394426    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:13.405599    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:13.405612    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:13.422376    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:13.422392    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:13.433775    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:13.433788    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:13.438526    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:13.438536    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:13.474655    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:13.474667    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:13.486509    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:13.486521    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:13.498485    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:13.498500    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:13.522052    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:13.522061    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:13.559267    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:13.559282    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:13.573650    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:13.573665    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:16.090396    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:21.092557    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:21.093142    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:21.132238    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:21.132394    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:21.158742    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:21.158852    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:21.172868    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:21.172962    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:21.186101    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:21.186189    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:21.197103    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:21.197178    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:21.207758    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:21.207842    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:21.218278    4242 logs.go:276] 0 containers: []
	W1001 12:35:21.218289    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:21.218361    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:21.228470    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:21.228486    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:21.228491    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:21.240414    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:21.240430    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:21.276787    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:21.276798    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:21.281308    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:21.281314    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:21.293333    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:21.293345    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:21.306993    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:21.307003    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:21.322078    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:21.322088    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:21.333995    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:21.334011    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:21.351158    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:21.351169    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:21.376343    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:21.376351    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:21.414841    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:21.414858    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:21.428929    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:21.428939    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:21.442885    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:21.442901    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:23.956546    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:28.959336    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:28.959861    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:28.999202    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:28.999378    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:29.020723    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:29.020842    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:29.036696    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:29.036798    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:29.049570    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:29.049665    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:29.060591    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:29.060678    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:29.071524    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:29.071610    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:29.081752    4242 logs.go:276] 0 containers: []
	W1001 12:35:29.081767    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:29.081839    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:29.092108    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:29.092124    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:29.092130    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:29.096607    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:29.096614    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:29.110806    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:29.110816    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:29.122958    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:29.122968    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:29.138939    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:29.138952    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:29.151841    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:29.151854    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:29.178088    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:29.178101    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:29.213952    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:29.213961    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:29.252318    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:29.252333    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:29.266816    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:29.266833    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:29.278925    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:29.278937    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:29.291144    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:29.291155    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:29.309390    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:29.309401    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:31.822815    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:36.825111    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:36.825344    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:36.843600    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:36.843713    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:36.857892    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:36.857982    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:36.869513    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:36.869591    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:36.880173    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:36.880260    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:36.890460    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:36.890540    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:36.900950    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:36.901035    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:36.911888    4242 logs.go:276] 0 containers: []
	W1001 12:35:36.911900    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:36.911977    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:36.921903    4242 logs.go:276] 1 containers: [97631f54aa43]
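
Every retry cycle re-enumerates the control-plane containers with the same docker ps filter, one component at a time. cri-dockerd names pod containers k8s_<container>_<pod>_<namespace>_..., so filtering on the k8s_<component> prefix finds a component's containers, if any; the repeated W-level line "No container was found matching \"kindnet\"" simply means this cluster does not run the kindnet CNI, and is informational rather than part of the failure. A self-contained sketch of the enumeration (helper names are hypothetical; the docker command is copied from the Run: lines above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listComponentContainers runs the same `docker ps` filter seen in the
    // log and returns the matching container IDs, possibly none.
    func listComponentContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, fmt.Errorf("docker ps for %s: %w", component, err)
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := listComponentContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Mirrors the "N containers: [...]" lines in the log.
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
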
	I1001 12:35:36.921919    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:36.921925    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:36.935861    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:36.935872    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:36.949848    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:36.949859    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:36.961439    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:36.961449    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:36.976581    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:36.976598    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:36.993081    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:36.993093    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:37.004812    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:37.004822    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:37.022023    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:37.022034    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:37.059086    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:37.059097    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:37.070643    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:37.070656    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:37.094481    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:37.094489    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:37.130619    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:37.130635    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:37.144974    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:37.144989    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
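
Once every component has been enumerated, the cycle fans out and tails each container's logs, then pulls the kubelet, Docker, and dmesg sources before returning to the healthz probe. A rough Go sketch of that fan-out follows; the shell strings are copied verbatim from the Run: lines above, while the Go scaffolding around them is an assumption for illustration.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gatherLogs mirrors the collection pass above: tail each component
    // container's logs, then read the systemd journals and kernel ring buffer.
    func gatherLogs(containerIDs map[string]string) map[string]string {
        logs := map[string]string{}
        for name, id := range containerIDs {
            out, _ := exec.Command("/bin/bash", "-c",
                "docker logs --tail 400 "+id).CombinedOutput()
            logs[name] = string(out)
        }
        for name, cmd := range map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "Docker":  "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        } {
            out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            logs[name] = string(out)
        }
        return logs
    }

    func main() {
        for name, text := range gatherLogs(map[string]string{"kube-apiserver": "b4b0ba48f60b"}) {
            fmt.Printf("%s: %d bytes\n", name, len(text))
        }
    }
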
	I1001 12:35:39.651695    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:44.654279    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:44.654758    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:44.687058    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:44.687240    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:44.706645    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:44.706761    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:44.726125    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:44.726226    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:44.745151    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:44.745238    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:44.755848    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:44.755936    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:44.766519    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:44.766598    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:44.776493    4242 logs.go:276] 0 containers: []
	W1001 12:35:44.776509    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:44.776586    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:44.787179    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:44.787194    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:44.787199    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:44.799282    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:44.799294    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:44.810739    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:44.810756    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:44.845172    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:44.845180    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:44.850574    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:44.850584    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:44.886074    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:44.886087    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:44.899734    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:44.899749    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:44.911284    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:44.911294    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:44.928535    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:44.928548    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:44.943675    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:44.943691    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:44.957961    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:44.957974    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:44.973907    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:44.973921    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:44.985205    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:44.985218    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:47.510689    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:52.513074    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:52.513671    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:52.552216    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:52.552402    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:52.574009    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:52.574152    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:52.588936    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:52.589038    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:52.601610    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:52.601702    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:52.612809    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:52.612890    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:52.623406    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:52.623482    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:52.634141    4242 logs.go:276] 0 containers: []
	W1001 12:35:52.634160    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:52.634232    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:52.647188    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:52.647202    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:52.647208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:52.664231    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:52.664242    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:52.679948    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:52.679960    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:52.696374    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:52.696385    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:52.715484    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:52.715495    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:52.751797    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:52.751805    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:52.756217    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:52.756225    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:52.794127    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:52.794142    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:52.806884    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:52.806896    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:52.830232    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:52.830241    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:52.841194    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:52.841207    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:52.859483    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:52.859494    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:52.873111    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:52.873121    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:55.386841    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:00.388199    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:00.388397    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:00.401553    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:00.401647    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:00.412254    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:00.412343    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:00.424979    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:00.425054    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:00.435456    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:00.435539    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:00.446504    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:00.446592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:00.457245    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:00.457322    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:00.467591    4242 logs.go:276] 0 containers: []
	W1001 12:36:00.467605    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:00.467676    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:00.478298    4242 logs.go:276] 1 containers: [97631f54aa43]
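
Note that the coredns enumeration grows from two containers to four in this cycle: f312b9c9ac08 and 1242378878f5 now appear ahead of the earlier pair, consistent with coredns containers being restarted or recreated while the apiserver endpoint still is not answering. A throwaway diff helper (purely illustrative, not part of minikube) makes that kind of change easy to spot when scanning cycles this long:

    package main

    import "fmt"

    // newContainers reports IDs present in the current enumeration but
    // absent from the previous one.
    func newContainers(prev, curr []string) []string {
        seen := map[string]bool{}
        for _, id := range prev {
            seen[id] = true
        }
        var added []string
        for _, id := range curr {
            if !seen[id] {
                added = append(added, id)
            }
        }
        return added
    }

    func main() {
        prev := []string{"5e5e58a930ac", "c3764113e7e4"}
        curr := []string{"f312b9c9ac08", "1242378878f5", "5e5e58a930ac", "c3764113e7e4"}
        fmt.Println("new coredns containers:", newContainers(prev, curr))
    }
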
	I1001 12:36:00.478315    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:00.478320    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:00.489851    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:00.489862    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:00.526237    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:00.526248    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:00.538048    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:00.538060    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:00.556941    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:00.556952    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:00.571755    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:00.571766    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:00.586154    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:00.586165    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:00.598498    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:00.598511    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:00.613442    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:00.613453    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:00.628584    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:00.628594    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:00.639953    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:00.639962    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:00.657693    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:00.657704    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:00.682313    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:00.682323    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:00.686802    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:00.686815    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:00.721970    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:00.721986    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:03.235101    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:08.237209    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:08.237400    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:08.249595    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:08.249689    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:08.260471    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:08.260563    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:08.271588    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:08.271677    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:08.286969    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:08.287058    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:08.298373    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:08.298460    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:08.309118    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:08.309205    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:08.320346    4242 logs.go:276] 0 containers: []
	W1001 12:36:08.320357    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:08.320425    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:08.331540    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:08.331558    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:08.331564    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:08.366350    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:08.366362    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:08.381441    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:08.381455    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:08.405687    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:08.405696    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:08.416582    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:08.416593    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:08.421438    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:08.421446    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:08.433681    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:08.433694    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:08.449933    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:08.449945    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:08.465497    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:08.465507    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:08.503900    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:08.503919    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:08.520157    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:08.520171    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:08.532224    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:08.532241    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:08.543954    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:08.543965    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:08.557698    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:08.557714    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:08.568807    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:08.568818    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:11.088829    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:16.091651    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:16.092022    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:16.119341    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:16.119493    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:16.138691    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:16.138807    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:16.153004    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:16.153099    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:16.169724    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:16.169811    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:16.180866    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:16.180950    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:16.194652    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:16.194740    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:16.204716    4242 logs.go:276] 0 containers: []
	W1001 12:36:16.204730    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:16.204803    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:16.215463    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:16.215484    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:16.215490    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:16.226834    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:16.226846    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:16.239772    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:16.239786    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:16.251222    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:16.251237    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:16.285810    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:16.285818    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:16.297431    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:16.297443    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:16.309597    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:16.309609    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:16.314301    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:16.314307    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:16.352957    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:16.352968    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:16.370066    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:16.370078    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:16.391676    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:16.391687    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:16.403310    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:16.403324    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:16.418577    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:16.418590    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:16.436311    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:16.436322    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:16.447843    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:16.447856    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:18.974808    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:23.977040    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:23.977290    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:24.001191    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:24.001310    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:24.015197    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:24.015301    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:24.027004    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:24.027086    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:24.038541    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:24.038627    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:24.048940    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:24.049023    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:24.059777    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:24.059868    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:24.070728    4242 logs.go:276] 0 containers: []
	W1001 12:36:24.070740    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:24.070812    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:24.081440    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:24.081457    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:24.081463    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:24.094175    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:24.094186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:24.113026    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:24.113039    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:24.124976    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:24.124988    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:24.162356    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:24.162368    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:24.174242    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:24.174253    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:24.188450    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:24.188461    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:24.204969    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:24.204982    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:24.218951    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:24.218963    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:24.238699    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:24.238712    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:24.254819    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:24.254831    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:24.259909    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:24.259915    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:24.301633    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:24.301647    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:24.317853    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:24.317864    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:24.330466    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:24.330480    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:26.856362    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:31.858645    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:31.858841    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:31.873130    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:31.873227    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:31.896927    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:31.897017    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:31.908224    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:31.908321    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:31.918578    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:31.918664    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:31.928959    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:31.929039    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:31.939159    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:31.939243    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:31.949259    4242 logs.go:276] 0 containers: []
	W1001 12:36:31.949270    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:31.949342    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:31.959597    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:31.959617    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:31.959623    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:31.970983    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:31.970998    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:31.975711    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:31.975719    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:32.011770    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:32.011784    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:32.026365    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:32.026377    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:32.038002    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:32.038016    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:32.055685    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:32.055699    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:32.071871    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:32.071887    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:32.094891    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:32.094902    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:32.106708    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:32.106724    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:32.120912    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:32.120922    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:32.132833    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:32.132849    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:32.144780    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:32.144796    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:32.156975    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:32.156990    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:32.182243    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:32.182252    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:34.721093    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:39.723372    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:39.723612    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:39.742555    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:39.742652    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:39.758067    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:39.758143    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:39.773992    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:39.774087    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:39.784355    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:39.784441    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:39.795752    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:39.795829    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:39.806768    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:39.806854    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:39.817440    4242 logs.go:276] 0 containers: []
	W1001 12:36:39.817450    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:39.817516    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:39.827817    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:39.827836    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:39.827841    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:39.840013    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:39.840023    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:39.863982    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:39.863991    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:39.875692    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:39.875705    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:39.887485    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:39.887495    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:39.898950    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:39.898964    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:39.910179    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:39.910191    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:39.921669    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:39.921684    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:39.946834    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:39.946846    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:39.971663    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:39.971674    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:40.008650    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:40.008659    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:40.013093    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:40.013100    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:40.047130    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:40.047144    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:40.061586    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:40.061603    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:40.077409    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:40.077420    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:42.598164    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:47.600529    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:47.600712    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:47.614462    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:47.614560    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:47.625766    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:47.625853    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:47.636611    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:47.636689    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:47.647402    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:47.647485    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:47.658363    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:47.658451    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:47.672905    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:47.672981    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:47.682736    4242 logs.go:276] 0 containers: []
	W1001 12:36:47.682747    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:47.682822    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:47.696908    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:47.696924    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:47.696929    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:47.711019    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:47.711034    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:47.722818    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:47.722835    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:47.758831    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:47.758841    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:47.781823    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:47.781831    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:47.799347    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:47.799359    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:47.811678    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:47.811692    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:47.816692    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:47.816701    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:47.852490    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:47.852501    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:47.864742    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:47.864760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:47.876205    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:47.876217    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:47.887868    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:47.887880    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:47.903402    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:47.903418    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:47.917854    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:47.917864    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:47.929836    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:47.929847    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:50.444847    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:55.446945    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:55.447086    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:55.458994    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:55.459079    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:55.469900    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:55.469987    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:55.485161    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:55.485251    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:55.495965    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:55.496047    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:55.506809    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:55.506899    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:55.517484    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:55.517572    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:55.530595    4242 logs.go:276] 0 containers: []
	W1001 12:36:55.530607    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:55.530679    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:55.541709    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:55.541727    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:55.541733    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:55.558558    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:55.558573    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:55.576040    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:55.576053    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:55.602451    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:55.602462    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:55.607671    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:55.607678    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:55.622429    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:55.622442    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:55.636942    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:55.636955    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:55.649279    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:55.649290    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:55.685939    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:55.685952    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:55.698060    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:55.698073    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:55.711071    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:55.711083    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:55.723511    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:55.723523    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:55.759511    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:55.759523    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:55.777338    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:55.777353    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:55.789349    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:55.789360    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:58.309556    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:03.311670    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:03.311802    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:03.324548    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:03.324630    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:03.335888    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:03.335963    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:03.346579    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:03.346667    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:03.357413    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:03.357497    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:03.368073    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:03.368153    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:03.381143    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:03.381233    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:03.391302    4242 logs.go:276] 0 containers: []
	W1001 12:37:03.391317    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:03.391393    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:03.401604    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:03.401620    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:03.401626    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:03.436757    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:03.436770    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:03.448171    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:03.448186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:03.460076    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:03.460087    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:03.471932    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:03.471945    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:03.496957    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:03.496966    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:03.512509    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:03.512519    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:03.548732    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:03.548743    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:03.553164    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:03.553170    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:03.566921    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:03.566932    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:03.579139    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:03.579150    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:03.594417    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:03.594428    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:03.611609    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:03.611620    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:03.623851    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:03.623862    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:03.638444    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:03.638460    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:06.151842    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:11.154058    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:11.154241    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:11.166857    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:11.166946    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:11.178222    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:11.178306    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:11.189208    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:11.189294    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:11.199899    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:11.199984    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:11.210541    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:11.210620    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:11.221957    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:11.222041    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:11.231928    4242 logs.go:276] 0 containers: []
	W1001 12:37:11.231939    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:11.232017    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:11.242491    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:11.242510    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:11.242515    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:11.247458    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:11.247466    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:11.263260    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:11.263278    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:11.281690    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:11.281702    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:11.308657    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:11.308672    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:11.346313    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:11.346326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:11.358079    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:11.358096    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:11.370965    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:11.370978    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:11.386227    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:11.386245    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:11.397879    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:11.397890    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:11.409191    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:11.409205    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:11.421302    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:11.421313    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:11.456111    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:11.456119    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:11.470741    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:11.470753    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:11.486007    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:11.486019    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:13.999777    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:19.002048    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:19.002191    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:19.014063    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:19.014140    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:19.024390    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:19.024475    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:19.034655    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:19.034746    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:19.045243    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:19.045327    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:19.055862    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:19.055941    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:19.075511    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:19.075586    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:19.086031    4242 logs.go:276] 0 containers: []
	W1001 12:37:19.086043    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:19.086114    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:19.096551    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:19.096568    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:19.096574    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:19.101143    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:19.101150    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:19.112170    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:19.112185    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:19.124019    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:19.124028    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:19.141256    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:19.141267    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:19.152630    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:19.152643    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:19.176159    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:19.176169    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:19.211416    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:19.211425    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:19.247303    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:19.247316    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:19.261487    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:19.261500    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:19.274164    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:19.274172    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:19.288330    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:19.288347    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:19.299685    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:19.299696    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:19.311261    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:19.311270    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:19.322945    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:19.322957    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:21.842433    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:26.844601    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:26.844825    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:26.860678    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:26.860781    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:26.872320    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:26.872410    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:26.883239    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:26.883331    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:26.893934    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:26.894020    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:26.904507    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:26.904587    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:26.915524    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:26.915610    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:26.927314    4242 logs.go:276] 0 containers: []
	W1001 12:37:26.927329    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:26.927404    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:26.938147    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:26.938171    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:26.938177    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:26.955573    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:26.955584    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:26.968909    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:26.968921    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:27.006960    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:27.006975    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:27.045407    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:27.045418    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:27.059784    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:27.059796    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:27.071458    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:27.071470    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:27.076014    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:27.076020    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:27.087871    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:27.087887    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:27.103340    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:27.103351    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:27.125752    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:27.125760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:27.140152    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:27.140168    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:27.151935    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:27.151947    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:27.164063    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:27.164080    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:27.177568    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:27.177580    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:29.692069    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:34.694223    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:34.694452    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:34.712199    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:34.712301    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:34.726196    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:34.726289    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:34.737910    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:34.737988    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:34.749715    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:34.749792    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:34.760398    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:34.760481    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:34.771557    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:34.771647    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:34.784813    4242 logs.go:276] 0 containers: []
	W1001 12:37:34.784828    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:34.784910    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:34.800757    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:34.800775    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:34.800782    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:34.812456    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:34.812471    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:34.846874    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:34.846890    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:34.862592    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:34.862602    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:34.898774    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:34.898782    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:34.910766    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:34.910777    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:34.925480    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:34.925495    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:34.937197    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:34.937208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:34.954424    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:34.954437    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:34.977706    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:34.977714    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:34.981710    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:34.981719    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:34.996882    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:34.996894    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:35.010211    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:35.010222    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:35.021664    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:35.021673    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:35.033714    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:35.033726    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:37.549695    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:42.551909    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:42.555637    4242 out.go:201] 
	W1001 12:37:42.559317    4242 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 12:37:42.559333    4242 out.go:270] * 
	W1001 12:37:42.560204    4242 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:37:42.571369    4242 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-01 12:37:42.669931 -0700 PDT m=+3093.543446460
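(Note on the failure mode above: the stderr shows api_server.go polling https://10.0.2.15:8443/healthz with roughly 5-second per-request timeouts, shelling out to gather container and journal logs between attempts, and giving up when the 6m0s node-start deadline expires, which surfaces as GUEST_START / exit status 80. The following is a minimal Go sketch of that polling pattern, not minikube's actual implementation; the 2-second retry interval, the function names, and the InsecureSkipVerify choice are assumptions for illustration.)

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or ctx expires.
func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// Matches the ~5s gap between "Checking" and "stopped" lines in the log.
		Timeout: 5 * time.Second,
		// Assumption: the bootstrap apiserver cert is self-signed, so this
		// sketch skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		// Here minikube gathers docker/kubelet logs for diagnostics; the
		// sketch just waits before retrying.
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}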
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-810000 -n running-upgrade-810000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-810000 -n running-upgrade-810000: exit status 2 (15.597547834s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
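(Note: the harness treats exit status 2 from `status` as non-fatal, since the host reports Running while some cluster component is not. Below is a hedged Go sketch of how a post-mortem helper could make that call with os/exec; the binary path and profile name are taken from the lines above, but the tolerant handling shown is an assumption for illustration, not the test's verbatim code.)

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "running-upgrade-810000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		// Exit status 2 means the host responded but a component is stopped
		// or errored; log it and keep collecting post-mortem data.
		fmt.Println("status error: exit status 2 (may be ok)")
	} else if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Printf("host state: %s", out) // e.g. "Running"
}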
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-810000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-155000          | force-systemd-flag-155000 | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-777000              | force-systemd-env-777000  | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-777000           | force-systemd-env-777000  | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT | 01 Oct 24 12:27 PDT |
	| start   | -p docker-flags-780000                | docker-flags-780000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-155000             | force-systemd-flag-155000 | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-155000          | force-systemd-flag-155000 | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT | 01 Oct 24 12:27 PDT |
	| start   | -p cert-expiration-211000             | cert-expiration-211000    | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-780000 ssh               | docker-flags-780000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-780000 ssh               | docker-flags-780000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-780000                | docker-flags-780000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT | 01 Oct 24 12:27 PDT |
	| start   | -p cert-options-867000                | cert-options-867000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-867000 ssh               | cert-options-867000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-867000 -- sudo        | cert-options-867000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-867000                | cert-options-867000       | jenkins | v1.34.0 | 01 Oct 24 12:27 PDT | 01 Oct 24 12:27 PDT |
	| start   | -p running-upgrade-810000             | minikube                  | jenkins | v1.26.0 | 01 Oct 24 12:27 PDT | 01 Oct 24 12:29 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-810000             | running-upgrade-810000    | jenkins | v1.34.0 | 01 Oct 24 12:29 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-211000             | cert-expiration-211000    | jenkins | v1.34.0 | 01 Oct 24 12:30 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-211000             | cert-expiration-211000    | jenkins | v1.34.0 | 01 Oct 24 12:30 PDT | 01 Oct 24 12:30 PDT |
	| start   | -p kubernetes-upgrade-889000          | kubernetes-upgrade-889000 | jenkins | v1.34.0 | 01 Oct 24 12:30 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-889000          | kubernetes-upgrade-889000 | jenkins | v1.34.0 | 01 Oct 24 12:31 PDT | 01 Oct 24 12:31 PDT |
	| start   | -p kubernetes-upgrade-889000          | kubernetes-upgrade-889000 | jenkins | v1.34.0 | 01 Oct 24 12:31 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-889000          | kubernetes-upgrade-889000 | jenkins | v1.34.0 | 01 Oct 24 12:31 PDT | 01 Oct 24 12:31 PDT |
	| start   | -p stopped-upgrade-340000             | minikube                  | jenkins | v1.26.0 | 01 Oct 24 12:31 PDT | 01 Oct 24 12:32 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-340000 stop           | minikube                  | jenkins | v1.26.0 | 01 Oct 24 12:32 PDT | 01 Oct 24 12:32 PDT |
	| start   | -p stopped-upgrade-340000             | stopped-upgrade-340000    | jenkins | v1.34.0 | 01 Oct 24 12:32 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 12:32:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 12:32:17.249419    4721 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:32:17.249565    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:32:17.249569    4721 out.go:358] Setting ErrFile to fd 2...
	I1001 12:32:17.249572    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:32:17.249742    4721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:32:17.250885    4721 out.go:352] Setting JSON to false
	I1001 12:32:17.269490    4721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3702,"bootTime":1727807435,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:32:17.269574    4721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:32:17.272199    4721 out.go:177] * [stopped-upgrade-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:32:17.280216    4721 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:32:17.280271    4721 notify.go:220] Checking for updates...
	I1001 12:32:17.287122    4721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:32:17.290170    4721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:32:17.293091    4721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:32:17.296147    4721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:32:17.299180    4721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:32:17.302373    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:32:17.306126    4721 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 12:32:17.309203    4721 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:32:17.313093    4721 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:32:17.320178    4721 start.go:297] selected driver: qemu2
	I1001 12:32:17.320184    4721 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:17.320245    4721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:32:17.322859    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:32:17.322891    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:32:17.322918    4721 start.go:340] cluster config:
	{Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:17.322970    4721 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:32:17.331177    4721 out.go:177] * Starting "stopped-upgrade-340000" primary control-plane node in "stopped-upgrade-340000" cluster
	I1001 12:32:17.334109    4721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:32:17.334127    4721 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 12:32:17.334133    4721 cache.go:56] Caching tarball of preloaded images
	I1001 12:32:17.334176    4721 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:32:17.334182    4721 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 12:32:17.334231    4721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/config.json ...
	I1001 12:32:17.334712    4721 start.go:360] acquireMachinesLock for stopped-upgrade-340000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:32:17.334747    4721 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "stopped-upgrade-340000"
	I1001 12:32:17.334757    4721 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:32:17.334762    4721 fix.go:54] fixHost starting: 
	I1001 12:32:17.334881    4721 fix.go:112] recreateIfNeeded on stopped-upgrade-340000: state=Stopped err=<nil>
	W1001 12:32:17.334891    4721 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:32:17.338162    4721 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-340000" ...
	I1001 12:32:15.466024    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:15.466206    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:15.480053    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:15.480138    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:15.493242    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:15.493330    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:15.503838    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:15.503919    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:15.514517    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:15.514605    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:15.525048    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:15.525137    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:15.535781    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:15.535864    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:15.546157    4242 logs.go:276] 0 containers: []
	W1001 12:32:15.546169    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:15.546241    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:15.556897    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:15.556917    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:15.556926    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:15.567893    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:15.567906    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:15.579540    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:15.579549    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:15.590995    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:15.591007    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:15.614330    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:15.614341    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:15.618964    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:15.618971    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:15.633654    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:15.633669    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:15.646543    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:15.646555    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:15.687975    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:15.687988    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:15.704301    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:15.704315    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:15.718792    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:15.718807    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:15.735179    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:15.735192    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:15.746936    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:15.746948    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:15.764187    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:15.764197    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:15.775355    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:15.775370    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:15.787054    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:15.787068    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:15.825245    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:15.825254    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:18.356378    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:17.345118    4721 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:32:17.345193    4721 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50476-:22,hostfwd=tcp::50477-:2376,hostname=stopped-upgrade-340000 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/disk.qcow2
	I1001 12:32:17.394095    4721 main.go:141] libmachine: STDOUT: 
	I1001 12:32:17.394128    4721 main.go:141] libmachine: STDERR: 
	I1001 12:32:17.394134    4721 main.go:141] libmachine: Waiting for VM to start (ssh -p 50476 docker@127.0.0.1)...
	I1001 12:32:23.358780    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:23.358990    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:23.370733    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:23.370825    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:23.386121    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:23.386211    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:23.400330    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:23.400414    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:23.411125    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:23.411210    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:23.421622    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:23.421703    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:23.432530    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:23.432614    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:23.444500    4242 logs.go:276] 0 containers: []
	W1001 12:32:23.444511    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:23.444581    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:23.455891    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:23.455910    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:23.455916    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:23.468029    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:23.468040    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:23.504062    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:23.504072    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:23.521191    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:23.521202    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:23.533124    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:23.533136    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:23.544275    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:23.544287    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:23.563759    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:23.563774    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:23.577667    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:23.577683    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:23.590922    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:23.590934    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:23.615872    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:23.615880    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:23.623378    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:23.623386    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:23.635624    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:23.635635    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:23.650493    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:23.650503    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:23.662689    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:23.662701    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:23.682797    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:23.682807    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:23.694958    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:23.694968    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:23.734989    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:23.735006    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
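Editor's note: the block above is one full iteration of minikube's apiserver wait loop: probe https://10.0.2.15:8443/healthz with a short client-side timeout, and when the probe fails, enumerate the control-plane containers and dump each one's logs before trying again. A minimal Go sketch of that probe pattern follows; `checkHealthz`, the 5-second timeout, and the retry cap are illustrative assumptions, not minikube's actual api_server.go code.

```go
// Illustrative sketch of the healthz probe/retry loop visible in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		// The log shows ~5s between "Checking" and "stopped:", i.e. a
		// client-side timeout rather than a server error.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is self-signed from the probe's viewpoint.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 5; i++ {
		err := checkHealthz(url)
		if err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Println("stopped:", err)
		// The real loop gathers container and journalctl logs here before retrying.
		time.Sleep(3 * time.Second)
	}
}
```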
	I1001 12:32:26.252176    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:31.253299    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:31.253424    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:31.264656    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:31.264738    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:31.283156    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:31.283265    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:31.294322    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:31.294407    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:31.305591    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:31.305673    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:31.316532    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:31.316614    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:31.328155    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:31.328239    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:31.338454    4242 logs.go:276] 0 containers: []
	W1001 12:32:31.338467    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:31.338540    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:31.349325    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:31.349348    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:31.349354    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:31.354333    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:31.354341    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:31.374490    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:31.374501    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:31.388786    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:31.388797    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:31.407130    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:31.407146    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:31.418744    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:31.418759    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:31.433722    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:31.433735    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:31.450893    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:31.450904    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:31.462595    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:31.462607    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:31.481359    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:31.481371    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:31.501382    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:31.501398    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:31.513713    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:31.513725    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:31.555197    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:31.555216    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:31.594741    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:31.594754    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:31.615723    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:31.615737    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:31.636758    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:31.636771    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:31.648830    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:31.648842    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:34.176492    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:37.139627    4721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/config.json ...
	I1001 12:32:37.140523    4721 machine.go:93] provisionDockerMachine start ...
	I1001 12:32:37.140740    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.141289    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.141303    4721 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 12:32:37.219434    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 12:32:37.219470    4721 buildroot.go:166] provisioning hostname "stopped-upgrade-340000"
	I1001 12:32:37.219624    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.219854    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.219865    4721 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-340000 && echo "stopped-upgrade-340000" | sudo tee /etc/hostname
	I1001 12:32:39.179208    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:39.179686    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:39.212320    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:39.212479    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:39.232698    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:39.232817    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:39.247631    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:39.247726    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:39.260446    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:39.260533    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:39.271192    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:39.271272    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:39.282373    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:39.282448    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:39.292661    4242 logs.go:276] 0 containers: []
	W1001 12:32:39.292672    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:39.292737    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:39.303302    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:39.303320    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:39.303326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:39.319396    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:39.319406    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:39.333094    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:39.333105    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:39.356762    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:39.356773    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:39.391639    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:39.391649    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:39.395753    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:39.395761    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:39.414726    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:39.414740    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:39.429911    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:39.429922    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:39.452983    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:39.452997    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:39.464741    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:39.464752    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:39.477527    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:39.477543    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:39.515321    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:39.515335    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:39.527457    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:39.527468    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:39.543835    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:39.543846    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:39.565106    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:39.565122    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:39.579121    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:39.579134    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:39.591122    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:39.591135    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:37.285322    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-340000
	
	I1001 12:32:37.285419    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.285600    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.285611    4721 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 12:32:37.343358    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
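Editor's note: the SSH script above is an idempotent hosts-file patch: if no line already ends with the new hostname, it either rewrites the existing 127.0.1.1 entry in place or appends one. The same logic in Go, as a hedged sketch; `ensureHostsEntry` is a made-up helper, and minikube does this with grep/sed/tee over SSH instead.

```go
// Sketch of the idempotent /etc/hosts patch performed over SSH above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already present? (any line whose last field is the hostname, mirroring grep -x)
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "stopped-upgrade-340000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```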
	I1001 12:32:37.343373    4721 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19736-1073/.minikube CaCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19736-1073/.minikube}
	I1001 12:32:37.343383    4721 buildroot.go:174] setting up certificates
	I1001 12:32:37.343388    4721 provision.go:84] configureAuth start
	I1001 12:32:37.343397    4721 provision.go:143] copyHostCerts
	I1001 12:32:37.343489    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem, removing ...
	I1001 12:32:37.343496    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem
	I1001 12:32:37.343638    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem (1078 bytes)
	I1001 12:32:37.343852    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem, removing ...
	I1001 12:32:37.343856    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem
	I1001 12:32:37.343956    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem (1123 bytes)
	I1001 12:32:37.344544    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem, removing ...
	I1001 12:32:37.344550    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem
	I1001 12:32:37.344626    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem (1675 bytes)
	I1001 12:32:37.344742    4721 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-340000 san=[127.0.0.1 localhost minikube stopped-upgrade-340000]
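Editor's note: configureAuth mints a fresh server certificate signed by the local minikube CA, carrying the SANs listed in the log line above (127.0.0.1, localhost, minikube, stopped-upgrade-340000). A compressed sketch of that kind of issuance with crypto/x509 follows; the throwaway in-memory CA, key size, and validity window are assumptions standing in for the ca.pem/ca-key.pem files on disk.

```go
// Sketch of issuing a SAN-bearing server cert from a CA, in the spirit of
// provision.go's "generating server cert" step; PEM persistence is trimmed.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-340000"}},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-340000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```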
	I1001 12:32:37.422459    4721 provision.go:177] copyRemoteCerts
	I1001 12:32:37.422493    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 12:32:37.422501    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:37.452283    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 12:32:37.459451    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 12:32:37.465789    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 12:32:37.472931    4721 provision.go:87] duration metric: took 129.536708ms to configureAuth
	I1001 12:32:37.472941    4721 buildroot.go:189] setting minikube options for container-runtime
	I1001 12:32:37.473037    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:32:37.473081    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.473171    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.473176    4721 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 12:32:37.522043    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 12:32:37.522053    4721 buildroot.go:70] root file system type: tmpfs
	I1001 12:32:37.522098    4721 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 12:32:37.522167    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.522271    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.522304    4721 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 12:32:37.574454    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 12:32:37.574515    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.574622    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.574633    4721 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 12:32:37.919367    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1001 12:32:37.919383    4721 machine.go:96] duration metric: took 778.864334ms to provisionDockerMachine
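Editor's note: the `diff -u ... || { mv ...; systemctl ... }` one-liner a few lines up is an update-if-changed guard: the live unit is only swapped out (and docker re-enabled and restarted) when the rendered docker.service.new differs from what is on disk; here diff fails simply because no docker.service existed yet, so the swap runs. The same guard sketched in Go, with illustrative paths and helper names:

```go
// Sketch of the update-if-changed unit swap above: replace the live unit and
// bounce docker only when the newly rendered file differs.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const live, next = "/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"
	old, err := os.ReadFile(live) // a missing live unit (as in the log) counts as "differs"
	cur, _ := os.ReadFile(next)
	if err == nil && bytes.Equal(old, cur) {
		fmt.Println("unit unchanged; nothing to do")
		return
	}
	_ = os.Rename(next, live)
	_ = run("systemctl", "daemon-reload")
	_ = run("systemctl", "enable", "docker")
	_ = run("systemctl", "restart", "docker")
}
```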
	I1001 12:32:37.919389    4721 start.go:293] postStartSetup for "stopped-upgrade-340000" (driver="qemu2")
	I1001 12:32:37.919396    4721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 12:32:37.919465    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 12:32:37.919475    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:37.945335    4721 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 12:32:37.946510    4721 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 12:32:37.946518    4721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/addons for local assets ...
	I1001 12:32:37.946818    4721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/files for local assets ...
	I1001 12:32:37.946967    4721 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem -> 15952.pem in /etc/ssl/certs
	I1001 12:32:37.947109    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 12:32:37.949838    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:32:37.956823    4721 start.go:296] duration metric: took 37.429042ms for postStartSetup
	I1001 12:32:37.956837    4721 fix.go:56] duration metric: took 20.622502125s for fixHost
	I1001 12:32:37.956878    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.956984    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.956989    4721 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 12:32:38.006188    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727811157.865323046
	
	I1001 12:32:38.006195    4721 fix.go:216] guest clock: 1727811157.865323046
	I1001 12:32:38.006199    4721 fix.go:229] Guest: 2024-10-01 12:32:37.865323046 -0700 PDT Remote: 2024-10-01 12:32:37.956839 -0700 PDT m=+20.737616168 (delta=-91.515954ms)
	I1001 12:32:38.006209    4721 fix.go:200] guest clock delta is within tolerance: -91.515954ms
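Editor's note: fix.go runs `date +%s.%N` on the guest, parses the result, and compares it against the host clock; only deltas outside tolerance trigger a resync, and the -91ms here passes. A sketch of that parse-and-compare using the exact values recorded above; the 2-second tolerance is an assumption, not minikube's actual threshold.

```go
// Sketch of the guest-clock tolerance check behind the fix.go lines above.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1727811157.865323046" // guest `date +%s.%N` output from the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	pdt := time.FixedZone("PDT", -7*3600)
	host := time.Date(2024, 10, 1, 12, 32, 37, 956839000, pdt) // "Remote" time from the log
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta) // ≈ -91ms
	} else {
		fmt.Printf("delta %v exceeds tolerance; would sync the guest clock\n", delta)
	}
}
```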
	I1001 12:32:38.006212    4721 start.go:83] releasing machines lock for "stopped-upgrade-340000", held for 20.671887041s
	I1001 12:32:38.006278    4721 ssh_runner.go:195] Run: cat /version.json
	I1001 12:32:38.006288    4721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 12:32:38.006287    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:38.006303    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	W1001 12:32:38.006886    4721 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50476: connect: connection refused
	I1001 12:32:38.006904    4721 retry.go:31] will retry after 215.635445ms: dial tcp [::1]:50476: connect: connection refused
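Editor's note: the dial failure above goes through minikube's generic retry helper (retry.go:31), which reruns the operation after a short backoff instead of failing the whole step. A sketch of that pattern; the helper name, attempt cap, and jittered delays are assumptions.

```go
// Sketch of the retry-after-backoff pattern behind the retry.go:31 line above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net"
	"time"
)

func withRetries(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return errors.New("all retries failed: " + err.Error())
}

func main() {
	_ = withRetries(5, func() error {
		conn, err := net.DialTimeout("tcp", "localhost:50476", time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	})
}
```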
	W1001 12:32:38.030919    4721 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 12:32:38.030959    4721 ssh_runner.go:195] Run: systemctl --version
	I1001 12:32:38.032776    4721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 12:32:38.034443    4721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 12:32:38.034471    4721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 12:32:38.037433    4721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 12:32:38.042189    4721 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 12:32:38.042199    4721 start.go:495] detecting cgroup driver to use...
	I1001 12:32:38.042276    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:32:38.049237    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 12:32:38.052583    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 12:32:38.055664    4721 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 12:32:38.055692    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 12:32:38.058609    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:32:38.061603    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 12:32:38.064978    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:32:38.068463    4721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 12:32:38.071651    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 12:32:38.074421    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 12:32:38.077610    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 12:32:38.080904    4721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 12:32:38.083558    4721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 12:32:38.086243    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:38.175252    4721 ssh_runner.go:195] Run: sudo systemctl restart containerd
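Editor's note: the run of sed edits above normalizes /etc/containerd/config.toml for the chosen cgroup driver: pin the pause image, force `SystemdCgroup = false` (cgroupfs), migrate io.containerd.runtime.v1/runc.v1 names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before the daemon-reload and restart. One of those rewrites expressed in Go, as an illustrative equivalent of the corresponding sed line; the real flow shells out to sed over SSH rather than rewriting the file locally.

```go
// Go equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err) // on the guest this file exists; locally it likely won't
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
```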
	I1001 12:32:38.181501    4721 start.go:495] detecting cgroup driver to use...
	I1001 12:32:38.181571    4721 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 12:32:38.187683    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:32:38.192667    4721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 12:32:38.200444    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:32:38.205324    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:32:38.210398    4721 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 12:32:38.262167    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:32:38.296547    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:32:38.309076    4721 ssh_runner.go:195] Run: which cri-dockerd
	I1001 12:32:38.310796    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 12:32:38.314043    4721 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 12:32:38.319653    4721 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 12:32:38.398087    4721 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 12:32:38.462381    4721 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 12:32:38.462443    4721 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 12:32:38.467504    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:38.546971    4721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:32:39.673026    4721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126062292s)
	I1001 12:32:39.673092    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 12:32:39.677335    4721 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 12:32:39.684002    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:32:39.688561    4721 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 12:32:39.765064    4721 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 12:32:39.830042    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:39.911068    4721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 12:32:39.916433    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:32:39.920672    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:40.012963    4721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 12:32:40.052465    4721 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 12:32:40.052558    4721 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 12:32:40.055292    4721 start.go:563] Will wait 60s for crictl version
	I1001 12:32:40.055355    4721 ssh_runner.go:195] Run: which crictl
	I1001 12:32:40.056713    4721 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 12:32:40.070933    4721 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 12:32:40.071019    4721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:32:40.087067    4721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:32:40.102693    4721 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 12:32:40.102844    4721 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 12:32:40.104095    4721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 12:32:40.107642    4721 kubeadm.go:883] updating cluster {Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 12:32:40.107687    4721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:32:40.107742    4721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:32:40.118384    4721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:32:40.118391    4721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:32:40.118439    4721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:32:40.121683    4721 ssh_runner.go:195] Run: which lz4
	I1001 12:32:40.123039    4721 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 12:32:40.124227    4721 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 12:32:40.124236    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 12:32:41.044752    4721 docker.go:649] duration metric: took 921.768334ms to copy over tarball
	I1001 12:32:41.044823    4721 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 12:32:42.206900    4721 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162089375s)
	I1001 12:32:42.206914    4721 ssh_runner.go:146] rm: /preloaded.tar.lz4
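Editor's note: the preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~360 MB cached tarball over when it is missing, extract it into /var with tar's lz4 filter (preserving xattrs so file capabilities survive), then delete the tarball. Go's standard library has no lz4 reader, so a faithful sketch shells out to tar the same way; in minikube this runs on the guest via ssh_runner, not locally.

```go
// Sketch of the preload extraction step above; paths are taken from the log.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove("/preloaded.tar.lz4") // mirrors the ssh_runner.go:146 rm
}
```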
	I1001 12:32:42.222464    4721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:32:42.225710    4721 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 12:32:42.230932    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:42.103343    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:42.311899    4721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:32:44.007076    4721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6951955s)
	I1001 12:32:44.007196    4721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:32:44.019804    4721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:32:44.019817    4721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:32:44.019823    4721 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 12:32:44.023647    4721 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:44.025363    4721 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:44.027322    4721 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:44.027721    4721 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:44.029853    4721 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 12:32:44.029937    4721 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:44.031730    4721 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:44.031845    4721 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:44.033353    4721 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:44.034059    4721 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 12:32:44.034539    4721 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:44.034738    4721 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:44.035578    4721 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:44.035641    4721 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:44.036547    4721 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:44.037106    4721 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:45.939441    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:45.981489    4721 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 12:32:45.981543    4721 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:45.981679    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:46.002062    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 12:32:46.032580    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.048850    4721 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 12:32:46.048884    4721 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.048967    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.061925    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 12:32:46.068122    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.080009    4721 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 12:32:46.080031    4721 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.080103    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.089346    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.091087    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 12:32:46.099990    4721 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 12:32:46.100008    4721 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.100077    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.111302    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W1001 12:32:46.387348    4721 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 12:32:46.387552    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.401893    4721 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 12:32:46.401922    4721 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.402006    4721 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.416483    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 12:32:46.416616    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:32:46.418431    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 12:32:46.418442    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 12:32:46.445508    4721 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:32:46.445525    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1001 12:32:46.603634    4721 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 12:32:46.603776    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.605184    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 12:32:46.607689    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.696241    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 12:32:46.696284    4721 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 12:32:46.696303    4721 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.696341    4721 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 12:32:46.696351    4721 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 12:32:46.696353    4721 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 12:32:46.696364    4721 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.696372    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.696389    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 12:32:46.696400    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.716320    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 12:32:46.716463    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:32:46.719814    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 12:32:46.719825    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 12:32:46.719843    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 12:32:46.719851    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 12:32:46.719918    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 12:32:46.723674    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 12:32:46.723696    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 12:32:46.735613    4721 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 12:32:46.735626    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 12:32:46.783615    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1001 12:32:46.783644    4721 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:32:46.783651    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 12:32:46.821623    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 12:32:46.821664    4721 cache_images.go:92] duration metric: took 2.80189325s to LoadCachedImages
	W1001 12:32:46.821707    4721 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
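Editor's note: each cached image above follows the same three-step dance: `docker image inspect --format {{.Id}}` to see whether the tag resolves to the expected digest, `docker rmi` when it does not (the preload shipped k8s.gcr.io tags, while v1.24.1 expects registry.k8s.io), then loading the local cache tar via `docker load`; the run ends with the X warning because the etcd cache file itself is missing on the host. The inspect-then-load core, sketched below; helper names and the error handling are illustrative, not cache_images.go itself.

```go
// Sketch of the "needs transfer" check and docker load step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func imageID(ref string) (string, error) {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	return strings.TrimSpace(string(out)), err
}

func loadFromCache(ref, wantID, cacheTar string) error {
	if id, err := imageID(ref); err == nil && strings.Contains(id, wantID) {
		return nil // already present at the expected hash
	}
	_ = exec.Command("docker", "rmi", ref).Run() // drop the stale tag, if any
	f, err := os.Open(cacheTar)
	if err != nil {
		return err // e.g. the missing etcd_3.5.3-0 cache file in the log
	}
	defer f.Close()
	cmd := exec.Command("docker", "load") // minikube pipes `sudo cat <tar>` instead
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := loadFromCache("registry.k8s.io/pause:3.7",
		"e5a475a03805", "/var/lib/minikube/images/pause_3.7")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```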
	I1001 12:32:46.821713    4721 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 12:32:46.821772    4721 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 12:32:46.821862    4721 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 12:32:46.834571    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:32:46.834585    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:32:46.834593    4721 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 12:32:46.834603    4721 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-340000 NodeName:stopped-upgrade-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 12:32:46.834676    4721 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-340000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 12:32:46.834736    4721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 12:32:46.837512    4721 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 12:32:46.837547    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 12:32:46.840587    4721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 12:32:46.845672    4721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 12:32:46.850676    4721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 12:32:46.855818    4721 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 12:32:46.857007    4721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
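	The /etc/hosts update above is idempotent: filter out any existing tab-separated entry for the control-plane host, append the fresh mapping, and write via a temporary file. A Go sketch of the same logic, run here against a hypothetical local copy ("hosts.demo") rather than the real /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites a hosts file so exactly one line maps host to ip,
	// mirroring the grep -v / append / cp pattern in the log.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any prior mapping for this host (tab-separated, as in the log).
			if strings.HasSuffix(line, "\t"+host) || line == "" {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		// Write to a temp file first, then rename, so readers never see a partial file.
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := ensureHostsEntry("hosts.demo", "10.0.2.15", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}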
	I1001 12:32:46.861001    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:46.938753    4721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:32:46.944028    4721 certs.go:68] Setting up /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000 for IP: 10.0.2.15
	I1001 12:32:46.944041    4721 certs.go:194] generating shared ca certs ...
	I1001 12:32:46.944050    4721 certs.go:226] acquiring lock for ca certs: {Name:mk17296519b35110345119718efed98a68b82ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:46.944213    4721 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key
	I1001 12:32:46.944265    4721 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key
	I1001 12:32:46.944274    4721 certs.go:256] generating profile certs ...
	I1001 12:32:46.944348    4721 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key
	I1001 12:32:46.944367    4721 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7
	I1001 12:32:46.944379    4721 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 12:32:47.041919    4721 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 ...
	I1001 12:32:47.041930    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7: {Name:mk42a3009433a7b67664e87e44a566f172d07094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.049953    4721 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7 ...
	I1001 12:32:47.049960    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7: {Name:mka5194fa90f8ab5483c5dfcbae6295edf488a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.050129    4721 certs.go:381] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt
	I1001 12:32:47.052341    4721 certs.go:385] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key
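	The apiserver serving certificate above is generated with IP SANs for the service VIP (10.96.0.1), loopback, and the node IP (10.0.2.15). A self-contained sketch of generating a certificate with those SANs using Go's crypto/x509 (self-signed here for brevity; minikube actually signs with its minikubeCA):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative self-signed cert; the real one is signed by minikubeCA.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the log: service VIP, loopback, and the node IP.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.2.15"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}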
	I1001 12:32:47.052507    4721 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.key
	I1001 12:32:47.052641    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem (1338 bytes)
	W1001 12:32:47.052671    4721 certs.go:480] ignoring /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595_empty.pem, impossibly tiny 0 bytes
	I1001 12:32:47.052677    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 12:32:47.052703    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem (1078 bytes)
	I1001 12:32:47.052728    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem (1123 bytes)
	I1001 12:32:47.052756    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem (1675 bytes)
	I1001 12:32:47.052807    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:32:47.053184    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 12:32:47.059842    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 12:32:47.066103    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 12:32:47.073046    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 12:32:47.080530    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 12:32:47.087708    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 12:32:47.094529    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 12:32:47.101362    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 12:32:47.109320    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem --> /usr/share/ca-certificates/1595.pem (1338 bytes)
	I1001 12:32:47.117011    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /usr/share/ca-certificates/15952.pem (1708 bytes)
	I1001 12:32:47.124679    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 12:32:47.132547    4721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 12:32:47.138157    4721 ssh_runner.go:195] Run: openssl version
	I1001 12:32:47.140320    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1595.pem && ln -fs /usr/share/ca-certificates/1595.pem /etc/ssl/certs/1595.pem"
	I1001 12:32:47.143784    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.145418    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:02 /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.145449    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.147488    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1595.pem /etc/ssl/certs/51391683.0"
	I1001 12:32:47.150754    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15952.pem && ln -fs /usr/share/ca-certificates/15952.pem /etc/ssl/certs/15952.pem"
	I1001 12:32:47.154070    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.155689    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:02 /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.155720    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.157723    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15952.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 12:32:47.161292    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 12:32:47.164996    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.166739    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.166772    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.168686    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
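	The pattern above exists because OpenSSL locates trusted CAs in /etc/ssl/certs through symlinks named after the certificate's subject hash (e.g. b5213941.0), which is what `openssl x509 -hash -noout` prints. A Go sketch of the hash-then-symlink step, shelling out to openssl because the subject hash is an OpenSSL-specific digest (the path "minikubeCA.pem" is a hypothetical local file):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash for a PEM certificate,
	// the value `openssl x509 -hash -noout -in <pem>` prints.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		hash, err := subjectHash("minikubeCA.pem")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		link := hash + ".0"
		_ = os.Remove(link) // replace any stale link, like `ln -fs` in the log
		if err := os.Symlink("minikubeCA.pem", link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("linked", link)
	}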
	I1001 12:32:47.172185    4721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 12:32:47.173697    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 12:32:47.175829    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 12:32:47.177820    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 12:32:47.179941    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 12:32:47.181945    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 12:32:47.184047    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
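	Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours). The equivalent check in Go with crypto/x509, as a sketch ("apiserver.crt" stands in for the real /var/lib/minikube/certs paths):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path expires inside d,
	// the check `openssl x509 -checkend <seconds>` performs.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver.crt", 86400*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}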
	I1001 12:32:47.186059    4721 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:47.186133    4721 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:32:47.199252    4721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 12:32:47.202505    4721 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 12:32:47.202513    4721 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 12:32:47.202561    4721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 12:32:47.206272    4721 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:32:47.206593    4721 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-340000" does not appear in /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:32:47.206708    4721 kubeconfig.go:62] /Users/jenkins/minikube-integration/19736-1073/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-340000" cluster setting kubeconfig missing "stopped-upgrade-340000" context setting]
	I1001 12:32:47.206923    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.207388    4721 kapi.go:59] client config for stopped-upgrade-340000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:32:47.207749    4721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 12:32:47.211050    4721 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-340000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
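	The drift detection above is simply `diff -u` between the kubeadm config already on disk and the freshly rendered one; any difference triggers a reconfigure from the new file. A minimal Go sketch of that decision, comparing the two files byte-for-byte (the real code shells out to diff so it can log a readable report; the filenames are local stand-ins):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// configDrifted reports whether the freshly rendered kubeadm config differs
	// from the one on disk, the decision behind "detected kubeadm config drift".
	func configDrifted(current, next string) (bool, error) {
		a, err := os.ReadFile(current)
		if err != nil {
			if os.IsNotExist(err) {
				return true, nil // no old config at all: treat as drifted
			}
			return false, err
		}
		b, err := os.ReadFile(next)
		if err != nil {
			return false, err
		}
		return !bytes.Equal(a, b), nil
	}

	func main() {
		drifted, err := configDrifted("kubeadm.yaml", "kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("drifted:", drifted)
	}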
	I1001 12:32:47.211057    4721 kubeadm.go:1160] stopping kube-system containers ...
	I1001 12:32:47.211119    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:32:47.224939    4721 docker.go:483] Stopping containers: [d9956cf09477 ccd8354deb5e 7ad38fcc33d6 e0f6b93f81e7 316e5a1a5aed 64bb71576196 bc78f59fb2e5 4d8a8c79d4fe]
	I1001 12:32:47.225030    4721 ssh_runner.go:195] Run: docker stop d9956cf09477 ccd8354deb5e 7ad38fcc33d6 e0f6b93f81e7 316e5a1a5aed 64bb71576196 bc78f59fb2e5 4d8a8c79d4fe
	I1001 12:32:47.236319    4721 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 12:32:47.242097    4721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:32:47.245224    4721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:32:47.245233    4721 kubeadm.go:157] found existing configuration files:
	
	I1001 12:32:47.245274    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I1001 12:32:47.248355    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:32:47.248387    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:32:47.105555    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:47.105721    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:47.127079    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:47.127148    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:47.138384    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:47.138444    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:47.149937    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:47.150010    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:47.161606    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:47.161665    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:47.173418    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:47.173482    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:47.185114    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:47.185176    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:47.196305    4242 logs.go:276] 0 containers: []
	W1001 12:32:47.196315    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:47.196382    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:47.207805    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:47.207820    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:47.207826    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:47.253638    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:47.253648    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:47.258352    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:47.258361    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:47.273264    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:47.273274    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:47.287130    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:47.287146    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:47.299955    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:47.299963    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:47.312504    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:47.312517    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:47.348858    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:47.348875    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:47.370349    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:47.370365    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:47.385655    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:47.385668    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:47.397695    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:47.397706    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:47.414533    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:47.414551    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:47.426837    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:47.426852    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:47.446744    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:47.446760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:47.461930    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:47.461947    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:47.474274    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:47.474286    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:47.486064    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:47.486076    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
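	Each diagnosis pass above lists containers per component with `docker ps -a --filter=name=k8s_<component>` and then tails the last 400 log lines of each hit. A Go sketch of one such pass via os/exec (the component list here is a shortened, illustrative subset):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of containers whose name matches k8s_<component>,
	// the same `docker ps -a --filter=name=... --format={{.ID}}` call in the log.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, matching `docker logs --tail 400 <id>`.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
			}
		}
	}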
	I1001 12:32:47.251637    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I1001 12:32:47.254124    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:32:47.254159    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:32:47.257162    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I1001 12:32:47.260672    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:32:47.260724    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:32:47.263818    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I1001 12:32:47.266489    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:32:47.266536    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 12:32:47.269420    4721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:32:47.273021    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.298584    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.762104    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.904728    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.926625    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.954509    4721 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:32:47.954601    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.456680    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.956660    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.960775    4721 api_server.go:72] duration metric: took 1.006288584s to wait for apiserver process to appear ...
	I1001 12:32:48.960785    4721 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:32:48.960799    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
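	From here both processes poll /healthz over TLS with a per-request deadline; each timeout is logged as "stopped: ... context deadline exceeded" and the check retries until an overall budget runs out. A sketch of such a polling loop (InsecureSkipVerify keeps the example self-contained; the real client trusts the cluster CA instead):

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it answers 200 OK or the overall deadline passes,
	// mirroring the "Checking apiserver healthz" / "stopped: ..." cycle in the log.
	func waitHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			// Self-contained sketch only: skip verification instead of pinning the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			resp, err := client.Do(req)
			cancel()
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			} else {
				fmt.Println("stopped:", err) // e.g. context deadline exceeded
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}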
	I1001 12:32:50.011845    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:53.962914    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:53.963008    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:55.014265    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:55.014814    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:32:55.055174    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:32:55.055359    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:32:55.076737    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:32:55.076879    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:32:55.091627    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:32:55.091720    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:32:55.104520    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:32:55.104609    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:32:55.119528    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:32:55.119615    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:32:55.130516    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:32:55.130595    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:32:55.140077    4242 logs.go:276] 0 containers: []
	W1001 12:32:55.140088    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:32:55.140158    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:32:55.151457    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:32:55.151479    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:32:55.151485    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:32:55.166159    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:32:55.166170    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:32:55.178458    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:32:55.178469    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:32:55.196519    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:32:55.196536    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:32:55.209070    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:32:55.209082    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:32:55.229598    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:32:55.229613    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:32:55.241141    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:32:55.241155    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:32:55.252922    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:32:55.252935    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:32:55.264195    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:32:55.264207    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:32:55.286302    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:32:55.286309    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:32:55.322106    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:32:55.322122    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:32:55.357852    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:32:55.357866    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:32:55.362177    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:32:55.362186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:32:55.376070    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:32:55.376080    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:32:55.390059    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:32:55.390073    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:32:55.408397    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:32:55.408407    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:32:55.446229    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:32:55.446254    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:32:57.970755    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:58.963756    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:58.963835    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:02.973400    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:02.973926    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:03.013616    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:03.013795    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:03.034980    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:03.035102    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:03.050725    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:03.050829    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:03.062803    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:03.062902    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:03.073914    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:03.074004    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:03.088727    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:03.088837    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:03.099144    4242 logs.go:276] 0 containers: []
	W1001 12:33:03.099155    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:03.099232    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:03.110031    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:03.110047    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:03.110053    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:03.124039    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:03.124049    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:03.135426    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:03.135438    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:03.146693    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:03.146702    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:03.165130    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:03.165142    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:03.182340    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:03.182352    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:03.200164    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:03.200176    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:03.212476    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:03.212487    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:03.224104    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:03.224113    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:03.228929    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:03.228936    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:03.262739    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:03.262751    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:03.274450    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:03.274459    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:03.285955    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:03.285968    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:03.298667    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:03.298678    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:03.334293    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:03.334305    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:03.354175    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:03.354190    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:03.375136    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:03.375147    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:03.964696    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:03.964789    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:05.900477    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:08.966045    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:08.966081    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:10.902757    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:10.903376    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:10.944431    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:10.944599    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:10.968876    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:10.969001    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:10.983934    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:10.984029    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:10.996703    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:10.996795    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:11.007901    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:11.007983    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:11.018722    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:11.018806    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:11.029267    4242 logs.go:276] 0 containers: []
	W1001 12:33:11.029283    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:11.029353    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:11.040343    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:11.040366    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:11.040372    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:11.059872    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:11.059888    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:11.074664    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:11.074675    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:11.086562    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:11.086571    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:11.099094    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:11.099103    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:11.118672    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:11.118688    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:11.140684    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:11.140696    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:11.145454    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:11.145461    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:11.180980    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:11.180993    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:11.195628    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:11.195638    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:11.207205    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:11.207215    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:11.219056    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:11.219068    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:11.241595    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:11.241605    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:11.276474    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:11.276489    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:11.290983    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:11.290998    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:11.302756    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:11.302773    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:11.321424    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:11.321435    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:13.835724    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:13.966548    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:13.966643    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:18.838069    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:18.838700    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:18.863553    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:18.863666    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:18.878492    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:18.878577    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:18.890241    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:18.890324    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:18.900722    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:18.900808    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:18.911118    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:18.911195    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:18.921909    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:18.921980    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:18.932116    4242 logs.go:276] 0 containers: []
	W1001 12:33:18.932135    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:18.932202    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:18.943254    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:18.943273    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:18.943279    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:18.954552    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:18.954562    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:18.965943    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:18.965955    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:19.001839    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:19.001850    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:19.015879    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:19.015893    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:19.027224    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:19.027237    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:19.048747    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:19.048753    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:19.053426    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:19.053434    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:19.069603    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:19.069615    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:19.081493    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:19.081503    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:19.098580    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:19.098592    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:19.111887    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:19.111899    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:19.127201    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:19.127214    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:19.153836    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:19.153847    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:19.167962    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:19.167972    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:19.182249    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:19.182257    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:19.193765    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:19.193779    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:18.968281    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:18.968296    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:21.733377    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:23.969940    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:23.969992    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:26.735591    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:26.735718    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:26.746973    4242 logs.go:276] 2 containers: [c470955dfaae fbe4eddea511]
	I1001 12:33:26.747060    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:26.758092    4242 logs.go:276] 2 containers: [5b9e36bfadf5 1262a7e4c19e]
	I1001 12:33:26.758171    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:26.768214    4242 logs.go:276] 1 containers: [f1ff198f5b54]
	I1001 12:33:26.768293    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:26.779200    4242 logs.go:276] 2 containers: [b0fc6eb4a300 8f22eeb55450]
	I1001 12:33:26.779284    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:26.791515    4242 logs.go:276] 1 containers: [696dee0aa95d]
	I1001 12:33:26.791592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:26.801958    4242 logs.go:276] 2 containers: [13357b660e39 85f3a613a166]
	I1001 12:33:26.802039    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:26.812012    4242 logs.go:276] 0 containers: []
	W1001 12:33:26.812024    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:26.812090    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:26.822445    4242 logs.go:276] 2 containers: [992f91ff2f53 9729c1a1e22d]
	I1001 12:33:26.822463    4242 logs.go:123] Gathering logs for kube-scheduler [b0fc6eb4a300] ...
	I1001 12:33:26.822468    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0fc6eb4a300"
	I1001 12:33:26.834711    4242 logs.go:123] Gathering logs for kube-proxy [696dee0aa95d] ...
	I1001 12:33:26.834722    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 696dee0aa95d"
	I1001 12:33:26.848489    4242 logs.go:123] Gathering logs for kube-controller-manager [13357b660e39] ...
	I1001 12:33:26.848500    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13357b660e39"
	I1001 12:33:26.865351    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:33:26.865363    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:26.877440    4242 logs.go:123] Gathering logs for coredns [f1ff198f5b54] ...
	I1001 12:33:26.877456    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1ff198f5b54"
	I1001 12:33:26.888618    4242 logs.go:123] Gathering logs for etcd [1262a7e4c19e] ...
	I1001 12:33:26.888638    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1262a7e4c19e"
	I1001 12:33:26.903146    4242 logs.go:123] Gathering logs for kube-scheduler [8f22eeb55450] ...
	I1001 12:33:26.903157    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f22eeb55450"
	I1001 12:33:26.919172    4242 logs.go:123] Gathering logs for kube-controller-manager [85f3a613a166] ...
	I1001 12:33:26.919183    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 85f3a613a166"
	I1001 12:33:26.932425    4242 logs.go:123] Gathering logs for kube-apiserver [fbe4eddea511] ...
	I1001 12:33:26.932436    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbe4eddea511"
	I1001 12:33:26.952005    4242 logs.go:123] Gathering logs for storage-provisioner [992f91ff2f53] ...
	I1001 12:33:26.952018    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 992f91ff2f53"
	I1001 12:33:26.963743    4242 logs.go:123] Gathering logs for etcd [5b9e36bfadf5] ...
	I1001 12:33:26.963758    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b9e36bfadf5"
	I1001 12:33:26.982037    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:26.982053    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:26.986407    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:26.986414    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:27.020455    4242 logs.go:123] Gathering logs for kube-apiserver [c470955dfaae] ...
	I1001 12:33:27.020471    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c470955dfaae"
	I1001 12:33:27.035261    4242 logs.go:123] Gathering logs for storage-provisioner [9729c1a1e22d] ...
	I1001 12:33:27.035278    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9729c1a1e22d"
	I1001 12:33:27.046911    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:27.046925    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:27.069761    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:27.069768    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
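
Each "Gathering logs for <component>" block above follows the same two-step pattern: list the IDs of containers whose name matches k8s_<component> with a docker ps name filter, then tail the last 400 lines of each. A minimal local sketch of that pattern; the real code (minikube's logs.go) runs these commands over SSH inside the VM, and the helper names here are hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter=name=... --format={{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last 400 log lines of one container,
// mirroring the `docker logs --tail 400 <id>` calls above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("--- %s [%s] ---\n%s", component, id, logs)
		}
	}
}
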
	I1001 12:33:29.609064    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:28.972326    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:28.972427    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:34.610834    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:34.610995    4242 kubeadm.go:597] duration metric: took 4m4.020074875s to restartPrimaryControlPlane
	W1001 12:33:34.611149    4242 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 12:33:34.611218    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 12:33:35.668369    4242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0571585s)
	I1001 12:33:35.668464    4242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 12:33:35.673297    4242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:33:35.676055    4242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:33:35.678694    4242 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:33:35.678700    4242 kubeadm.go:157] found existing configuration files:
	
	I1001 12:33:35.678731    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf
	I1001 12:33:35.681608    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:33:35.681634    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:33:35.684960    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf
	I1001 12:33:35.687755    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:33:35.687787    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:33:35.690499    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf
	I1001 12:33:35.693626    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:33:35.693653    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:33:35.696188    4242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf
	I1001 12:33:35.698696    4242 kubeadm.go:163] "https://control-plane.minikube.internal:50292" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50292 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:33:35.698721    4242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
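
The four grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it still references the expected control-plane endpoint, and removed otherwise. In this run none of the files exist (each grep exits with status 2), so every rm -f is a no-op. A minimal sketch of the sweep, with the endpoint passed in as a parameter and the helper name hypothetical:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// sweepStaleConfigs removes any kubeconfig that no longer references the
// expected control-plane endpoint, mirroring the grep/rm pairs above.
// Missing files (grep exiting non-zero) are treated the same as stale ones.
func sweepStaleConfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep -q exits 0 only when the endpoint appears in the file.
		if err := exec.Command("grep", "-q", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			os.Remove(conf) // ignore the error, like `rm -f`
		}
	}
}

func main() {
	sweepStaleConfigs("https://control-plane.minikube.internal:50292")
}
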
	I1001 12:33:35.701702    4242 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 12:33:35.718827    4242 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 12:33:35.718876    4242 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 12:33:35.769314    4242 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 12:33:35.769367    4242 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 12:33:35.769420    4242 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 12:33:35.819657    4242 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 12:33:35.823872    4242 out.go:235]   - Generating certificates and keys ...
	I1001 12:33:35.823947    4242 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 12:33:35.824002    4242 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 12:33:35.824057    4242 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 12:33:35.824084    4242 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 12:33:35.824119    4242 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 12:33:35.824146    4242 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 12:33:35.824213    4242 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 12:33:35.824348    4242 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 12:33:35.824392    4242 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 12:33:35.824451    4242 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 12:33:35.824479    4242 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 12:33:35.824517    4242 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 12:33:35.899524    4242 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 12:33:36.083295    4242 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 12:33:36.116235    4242 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 12:33:36.150768    4242 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 12:33:36.180233    4242 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 12:33:36.180623    4242 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 12:33:36.180671    4242 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 12:33:36.251291    4242 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 12:33:33.975061    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:33.975108    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:36.259257    4242 out.go:235]   - Booting up control plane ...
	I1001 12:33:36.259390    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 12:33:36.259440    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 12:33:36.259505    4242 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 12:33:36.259551    4242 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 12:33:36.259649    4242 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 12:33:40.258822    4242 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.002460 seconds
	I1001 12:33:40.258883    4242 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 12:33:40.263743    4242 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 12:33:40.773731    4242 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 12:33:40.773963    4242 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-810000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 12:33:41.279592    4242 kubeadm.go:310] [bootstrap-token] Using token: x0io92.vmnxuthgf6zifeig
	I1001 12:33:41.285523    4242 out.go:235]   - Configuring RBAC rules ...
	I1001 12:33:41.285580    4242 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 12:33:41.285621    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 12:33:41.287692    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 12:33:41.290031    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1001 12:33:41.290750    4242 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 12:33:41.291571    4242 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 12:33:41.294523    4242 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 12:33:41.461839    4242 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 12:33:41.685194    4242 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 12:33:41.685621    4242 kubeadm.go:310] 
	I1001 12:33:41.685650    4242 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 12:33:41.685654    4242 kubeadm.go:310] 
	I1001 12:33:41.685697    4242 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 12:33:41.685726    4242 kubeadm.go:310] 
	I1001 12:33:41.685740    4242 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 12:33:41.685780    4242 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 12:33:41.685806    4242 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 12:33:41.685809    4242 kubeadm.go:310] 
	I1001 12:33:41.685834    4242 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 12:33:41.685836    4242 kubeadm.go:310] 
	I1001 12:33:41.685857    4242 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 12:33:41.685861    4242 kubeadm.go:310] 
	I1001 12:33:41.685888    4242 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 12:33:41.685929    4242 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 12:33:41.685973    4242 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 12:33:41.685978    4242 kubeadm.go:310] 
	I1001 12:33:41.686023    4242 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 12:33:41.686068    4242 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 12:33:41.686072    4242 kubeadm.go:310] 
	I1001 12:33:41.686113    4242 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x0io92.vmnxuthgf6zifeig \
	I1001 12:33:41.686163    4242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e \
	I1001 12:33:41.686174    4242 kubeadm.go:310] 	--control-plane 
	I1001 12:33:41.686178    4242 kubeadm.go:310] 
	I1001 12:33:41.686228    4242 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 12:33:41.686232    4242 kubeadm.go:310] 
	I1001 12:33:41.686275    4242 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x0io92.vmnxuthgf6zifeig \
	I1001 12:33:41.686342    4242 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e 
	I1001 12:33:41.686397    4242 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 12:33:41.686413    4242 cni.go:84] Creating CNI manager for ""
	I1001 12:33:41.686421    4242 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:33:41.689192    4242 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 12:33:41.693269    4242 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 12:33:41.696274    4242 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
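
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration; the log does not show their contents. The sketch below writes a typical minimal bridge conflist of the same general shape, so the subnet, plugin list, and file mode are illustrative assumptions rather than the exact bytes minikube generates:

package main

import "os"

// A typical minimal bridge CNI conflist; the subnet and plugin options
// here are illustrative, not the exact 496 bytes minikube writes.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	// Equivalent of the scp step above: place the conflist where the
	// kubelet's CNI plugin discovery will find it.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
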
	I1001 12:33:41.702249    4242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 12:33:41.702314    4242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 12:33:41.702349    4242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-810000 minikube.k8s.io/updated_at=2024_10_01T12_33_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=running-upgrade-810000 minikube.k8s.io/primary=true
	I1001 12:33:41.744489    4242 kubeadm.go:1113] duration metric: took 42.229542ms to wait for elevateKubeSystemPrivileges
	I1001 12:33:41.744494    4242 ops.go:34] apiserver oom_adj: -16
	I1001 12:33:41.744603    4242 kubeadm.go:394] duration metric: took 4m11.167389209s to StartCluster
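
The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver process located via pgrep; the reported -16 tells the kernel's OOM killer to strongly prefer sacrificing other processes first. A minimal sketch of the same check (helper name hypothetical; the pgrep pattern is simplified from the one in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds the newest kube-apiserver process and reads its
// oom_adj score, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
func apiserverOOMAdj() (string, error) {
	// -n selects the newest match, -f matches the full command line.
	out, err := exec.Command("pgrep", "-nf", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("kube-apiserver not running: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
}
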
	I1001 12:33:41.744614    4242 settings.go:142] acquiring lock: {Name:mk456a8b96b1746a679d3a85129b9d4d9b38bdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:33:41.744701    4242 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:33:41.745077    4242 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:33:41.745454    4242 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:33:41.745458    4242 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 12:33:41.745492    4242 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-810000"
	I1001 12:33:41.745500    4242 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-810000"
	W1001 12:33:41.745503    4242 addons.go:243] addon storage-provisioner should already be in state true
	I1001 12:33:41.745516    4242 host.go:66] Checking if "running-upgrade-810000" exists ...
	I1001 12:33:41.745516    4242 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-810000"
	I1001 12:33:41.745559    4242 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-810000"
	I1001 12:33:41.745586    4242 config.go:182] Loaded profile config "running-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:33:41.746409    4242 kapi.go:59] client config for running-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/running-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:33:41.746532    4242 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-810000"
	W1001 12:33:41.746537    4242 addons.go:243] addon default-storageclass should already be in state true
	I1001 12:33:41.746543    4242 host.go:66] Checking if "running-upgrade-810000" exists ...
	I1001 12:33:41.748265    4242 out.go:177] * Verifying Kubernetes components...
	I1001 12:33:41.748571    4242 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 12:33:41.752549    4242 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 12:33:41.752560    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:33:41.756149    4242 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:33:38.976880    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:38.976908    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:41.760190    4242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:33:41.763183    4242 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:33:41.763188    4242 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 12:33:41.763194    4242 sshutil.go:53] new ssh client: &{IP:localhost Port:50260 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/running-upgrade-810000/id_rsa Username:docker}
	I1001 12:33:41.837300    4242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:33:41.842722    4242 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:33:41.842772    4242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:33:41.847066    4242 api_server.go:72] duration metric: took 101.604166ms to wait for apiserver process to appear ...
	I1001 12:33:41.847075    4242 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:33:41.847082    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:41.851921    4242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:33:41.893566    4242 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
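
Both addon manifests are applied with the version-pinned kubectl binary and the in-VM kubeconfig; since kubectl apply is idempotent, replaying them on a restart is safe. A minimal sketch of that invocation pattern, with paths taken from the log and sudo omitted on the assumption the sketch already runs as root:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest replays one addon manifest the way minikube does above:
// the pinned kubectl binary, the in-VM kubeconfig, and `apply -f`.
func applyManifest(manifest string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		if err := applyManifest(m); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
}
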
	I1001 12:33:42.200609    4242 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 12:33:42.200622    4242 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 12:33:43.978557    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:43.978600    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:46.845705    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:46.845812    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:48.976009    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:48.976455    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:49.010273    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:33:49.010456    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:49.030201    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:33:49.030308    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:49.045189    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:33:49.045287    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:49.064740    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:33:49.064835    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:49.075751    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:33:49.075834    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:49.086746    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:33:49.086831    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:49.097388    4721 logs.go:276] 0 containers: []
	W1001 12:33:49.097401    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:49.097481    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:49.108032    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:33:49.108048    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:49.108054    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:49.112203    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:33:49.112212    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:33:49.127085    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:33:49.127094    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:33:49.139008    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:33:49.139022    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:33:49.151627    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:33:49.151638    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:33:49.196291    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:33:49.196301    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:33:49.210211    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:33:49.210222    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:33:49.222063    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:33:49.222075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:33:49.237137    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:33:49.237148    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:33:49.254161    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:49.254170    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:49.336181    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:49.336192    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:49.360840    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:33:49.360847    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:49.373091    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:49.373103    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:49.410430    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:33:49.410441    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:33:49.425029    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:33:49.425039    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:33:49.436739    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:33:49.436751    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:33:51.947715    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:51.842480    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:51.842500    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:56.946911    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:56.947112    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:56.969522    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:33:56.969649    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:56.984743    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:33:56.984844    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:56.997373    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:33:56.997464    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:57.007746    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:33:57.007839    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:57.018527    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:33:57.018613    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:57.028885    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:33:57.028986    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:57.039158    4721 logs.go:276] 0 containers: []
	W1001 12:33:57.039171    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:57.039241    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:57.049365    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:33:57.049383    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:57.049389    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:57.087616    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:33:57.087628    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:33:57.098633    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:33:57.098645    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:33:57.110634    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:57.110644    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:57.115118    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:57.115132    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:57.149635    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:33:57.149656    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:33:57.163894    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:33:57.163908    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:57.176182    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:33:57.176196    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:33:57.191558    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:33:57.191575    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:33:57.203337    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:33:57.203349    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:33:56.839956    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:56.840010    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:57.243261    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:33:57.243276    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:33:57.257959    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:33:57.257973    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:33:57.270248    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:33:57.270264    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:33:57.285598    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:33:57.285607    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:33:57.303533    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:33:57.303548    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:33:57.315269    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:57.315279    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:59.840640    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:01.838838    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:01.838887    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:04.841084    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:04.841326    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:04.857480    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:04.857582    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:04.870437    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:04.870531    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:04.881910    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:04.881993    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:04.892452    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:04.892545    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:04.902906    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:04.902990    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:04.914020    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:04.914108    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:04.924131    4721 logs.go:276] 0 containers: []
	W1001 12:34:04.924143    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:04.924215    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:04.934246    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:04.934265    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:04.934271    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:04.938912    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:04.938921    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:04.957623    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:04.957639    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:04.969549    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:04.969559    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:04.984972    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:04.984984    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:05.024230    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:05.024242    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:05.035874    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:05.035889    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:05.047588    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:05.047600    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:05.065465    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:05.065482    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:05.092623    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:05.092635    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:05.103922    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:05.103935    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:05.141543    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:05.141554    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:05.158182    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:05.158192    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:05.169309    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:05.169319    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:05.207861    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:05.207873    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:05.221820    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:05.221834    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:06.838296    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:06.838345    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:11.838470    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:11.838518    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 12:34:12.185695    4242 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 12:34:12.190209    4242 out.go:177] * Enabled addons: storage-provisioner
	I1001 12:34:07.737689    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:12.199076    4242 addons.go:510] duration metric: took 30.469652042s for enable addons: enabled=[storage-provisioner]
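
The 'default-storageclass' failure above is a StorageClass list call that never reached the apiserver (dial tcp 10.0.2.15:8443: i/o timeout). For reference, a minimal client-go sketch of the call that addon makes; the kubeconfig path is an assumption, and in this run the dial would time out before any listing happened:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig minikube writes for the VM.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The call that failed above: listing StorageClasses so one can be
	// marked as the default. With the apiserver unreachable this returns
	// a dial timeout, exactly as the log shows.
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}
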
	I1001 12:34:12.738784    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:12.738982    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:12.754628    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:12.754717    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:12.765445    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:12.765534    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:12.776164    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:12.776240    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:12.787422    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:12.787515    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:12.797987    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:12.798076    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:12.808264    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:12.808343    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:12.818383    4721 logs.go:276] 0 containers: []
	W1001 12:34:12.818396    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:12.818469    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:12.828516    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:12.828535    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:12.828540    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:12.854058    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:12.854064    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:12.889506    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:12.889517    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:12.904385    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:12.904395    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:12.921449    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:12.921459    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:12.932968    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:12.932978    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:12.945502    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:12.945516    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:12.986358    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:12.986371    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:13.005421    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:13.005434    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:13.017464    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:13.017479    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:13.029363    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:13.029374    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:13.065520    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:13.065529    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:13.084876    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:13.084886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:13.096326    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:13.096337    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:13.114564    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:13.114579    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:13.118603    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:13.118611    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:15.635440    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:16.839266    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:16.839309    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:20.637108    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:20.637609    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:20.673268    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:20.673490    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:20.694833    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:20.694952    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:20.709840    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:20.709938    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:20.722922    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:20.723014    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:20.733680    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:20.733765    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:20.752853    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:20.752942    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:20.763457    4721 logs.go:276] 0 containers: []
	W1001 12:34:20.763474    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:20.763546    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:20.774449    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:20.774472    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:20.774477    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:20.811779    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:20.811788    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:20.815797    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:20.815805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:20.830777    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:20.830789    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:20.842477    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:20.842489    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:20.860147    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:20.860158    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:20.874038    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:20.874048    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:20.898191    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:20.898200    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:20.912274    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:20.912285    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:20.927295    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:20.927306    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:20.938602    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:20.938618    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:20.950733    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:20.950743    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:20.961905    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:20.961920    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:20.999104    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:20.999116    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:21.037653    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:21.037665    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:21.048855    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:21.048871    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:21.840655    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:21.840697    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:23.563072    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:26.842357    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:26.842382    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:28.564939    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:28.565234    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:28.591766    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:28.591923    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:28.609188    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:28.609277    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:28.622228    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:28.622332    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:28.633955    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:28.634036    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:28.644569    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:28.644642    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:28.658945    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:28.659020    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:28.669393    4721 logs.go:276] 0 containers: []
	W1001 12:34:28.669407    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:28.669476    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:28.685243    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:28.685261    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:28.685267    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:28.689951    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:28.689962    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:28.725055    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:28.725066    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:28.737424    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:28.737435    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:28.756841    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:28.756852    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:28.795034    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:28.795045    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:28.832230    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:28.832244    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:28.846710    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:28.846722    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:28.858603    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:28.858613    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:28.876585    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:28.876595    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:28.888473    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:28.888483    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:28.903753    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:28.903763    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:28.927141    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:28.927149    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:28.940613    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:28.940627    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:28.951664    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:28.951676    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:28.966193    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:28.966203    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:31.484681    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:31.843849    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:31.843899    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:36.486768    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:36.487292    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:36.525019    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:36.525185    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:36.544001    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:36.544129    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:36.557991    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:36.558091    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:36.570349    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:36.570440    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:36.580945    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:36.581028    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:36.597638    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:36.597717    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:36.608312    4721 logs.go:276] 0 containers: []
	W1001 12:34:36.608325    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:36.608392    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:36.620245    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:36.620264    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:36.620269    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:36.657851    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:36.657866    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:36.696894    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:36.696909    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:36.722223    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:36.722234    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:36.741242    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:36.741253    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:36.753552    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:36.753568    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:36.766637    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:36.766651    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:36.778455    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:36.778466    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:36.816354    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:36.816365    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:36.830438    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:36.830479    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:36.841882    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:36.841892    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:36.856557    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:36.856572    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:36.868814    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:36.868827    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:36.873395    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:36.873404    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:36.888659    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:36.888673    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:36.914058    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:36.914066    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:36.844994    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:36.845016    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:39.433564    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:41.846904    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:41.847050    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:41.860497    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:41.860592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:41.873617    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:41.873698    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:41.885101    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:41.885185    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:41.910018    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:41.910110    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:41.937331    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:41.937416    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:41.948109    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:41.948184    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:41.958609    4242 logs.go:276] 0 containers: []
	W1001 12:34:41.958622    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:41.958684    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:41.969409    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:41.969425    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:41.969432    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:41.994363    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:41.994379    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:42.006466    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:42.006476    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:42.030780    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:42.030787    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:42.066417    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:42.066425    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:42.081402    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:42.081413    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:42.093626    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:42.093639    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:42.104878    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:42.104889    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:42.116822    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:42.116831    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:42.132713    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:42.132725    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:42.144793    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:42.144805    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:42.149686    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:42.149693    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:42.225505    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:42.225521    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:44.433826    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:44.433988    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:44.445830    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:44.445916    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:44.456787    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:44.456870    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:44.469204    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:44.469293    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:44.479321    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:44.479411    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:44.489397    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:44.489482    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:44.499912    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:44.499994    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:44.510074    4721 logs.go:276] 0 containers: []
	W1001 12:34:44.510087    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:44.510162    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:44.520703    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:44.520722    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:44.520728    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:44.532387    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:44.532399    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:44.544229    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:44.544240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:44.581795    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:44.581806    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:44.595943    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:44.595954    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:44.633667    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:44.633677    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:44.668961    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:44.668973    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:44.682894    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:44.682903    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:44.693898    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:44.693910    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:44.707065    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:44.707077    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:44.719761    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:44.719773    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:44.732499    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:44.732508    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:44.736879    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:44.736886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:44.751723    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:44.751737    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:44.769467    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:44.769476    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:44.794167    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:44.794181    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:44.739905    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:47.311287    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:49.741860    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:49.742081    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:49.755488    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:49.755582    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:49.767237    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:49.767327    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:49.777767    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:49.777856    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:49.788537    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:49.788627    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:49.803044    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:49.803132    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:49.814556    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:49.814642    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:49.824982    4242 logs.go:276] 0 containers: []
	W1001 12:34:49.824998    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:49.825074    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:49.835198    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:49.835216    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:49.835223    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:49.846635    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:49.846646    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:49.882004    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:49.882019    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:49.896360    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:49.896373    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:49.911970    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:49.911982    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:49.930642    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:49.930652    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:49.953504    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:49.953511    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:49.965421    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:49.965433    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:49.981382    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:49.981394    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:50.016209    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:50.016215    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:50.021011    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:50.021021    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:50.035150    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:50.035161    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:50.046330    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:50.046340    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:52.558070    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:52.313707    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:52.313911    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:52.327920    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:52.328023    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:52.345898    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:52.345973    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:52.356825    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:52.356898    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:52.368024    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:52.368108    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:52.378681    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:52.378769    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:52.389901    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:52.389991    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:52.404922    4721 logs.go:276] 0 containers: []
	W1001 12:34:52.404936    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:52.405007    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:52.415857    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:52.415873    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:52.415879    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:52.429384    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:52.429394    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:52.444408    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:52.444418    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:52.455977    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:52.455990    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:52.467626    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:52.467638    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:52.481891    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:52.481902    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:52.496429    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:52.496444    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:52.514994    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:52.515011    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:52.529032    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:52.529043    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:52.553615    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:52.553624    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:52.557863    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:52.557868    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:52.592769    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:52.592781    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:52.630706    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:52.630724    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:52.643168    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:52.643181    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:52.659572    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:52.659588    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:52.672133    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:52.672150    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:55.213271    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:57.560225    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:57.560703    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:57.595335    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:34:57.595507    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:57.615255    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:34:57.615373    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:57.631256    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:34:57.631333    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:57.643465    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:34:57.643556    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:57.658616    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:34:57.658694    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:57.669272    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:34:57.669362    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:57.679800    4242 logs.go:276] 0 containers: []
	W1001 12:34:57.679812    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:57.679889    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:57.689987    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:34:57.690002    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:57.690007    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:57.727562    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:57.727570    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:57.764445    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:34:57.764456    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:34:57.778603    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:34:57.778614    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:34:57.804860    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:34:57.804872    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:34:57.817413    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:34:57.817429    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:34:57.834868    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:34:57.834881    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:57.847138    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:57.847155    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:57.851507    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:34:57.851515    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:34:57.862926    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:34:57.862961    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:34:57.878949    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:34:57.878965    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:34:57.890316    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:34:57.890326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:34:57.901460    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:57.901470    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:00.214650    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:00.214931    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:00.232934    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:00.233045    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:00.247110    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:00.247195    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:00.259003    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:00.259083    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:00.269396    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:00.269481    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:00.279783    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:00.279854    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:00.290356    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:00.290443    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:00.301689    4721 logs.go:276] 0 containers: []
	W1001 12:35:00.301706    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:00.301779    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:00.311997    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:00.312015    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:00.312020    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:00.316656    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:00.316664    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:00.351081    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:00.351097    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:00.388905    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:00.388918    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:00.401141    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:00.401153    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:00.418123    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:00.418147    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:00.455880    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:00.455888    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:00.468106    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:00.468118    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:00.487574    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:00.487587    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:00.501467    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:00.501476    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:00.520881    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:00.520893    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:00.535389    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:00.535400    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:00.546557    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:00.546570    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:00.561042    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:00.561058    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:00.575395    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:00.575405    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:00.586633    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:00.586643    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:00.425867    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:03.111410    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:05.427927    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:05.428290    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:05.454372    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:05.454525    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:05.472862    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:05.472958    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:05.486488    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:05.486585    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:05.497661    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:05.497743    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:05.508493    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:05.508578    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:05.518869    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:05.518946    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:05.530629    4242 logs.go:276] 0 containers: []
	W1001 12:35:05.530641    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:05.530706    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:05.540645    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:05.540661    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:05.540666    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:05.564006    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:05.564014    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:05.598379    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:05.598387    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:05.612324    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:05.612334    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:05.626291    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:05.626307    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:05.637913    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:05.637929    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:05.649476    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:05.649492    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:05.666914    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:05.666924    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:05.671778    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:05.671784    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:05.710136    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:05.710149    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:05.721612    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:05.721622    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:05.736811    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:05.736824    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:05.748672    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:05.748688    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:08.264043    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:08.113593    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:08.113862    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:08.137900    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:08.138035    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:08.156586    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:08.156684    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:08.170342    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:08.170432    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:08.181408    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:08.181488    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:08.193116    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:08.193207    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:08.204665    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:08.204749    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:08.214899    4721 logs.go:276] 0 containers: []
	W1001 12:35:08.214913    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:08.214991    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:08.226214    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:08.226234    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:08.226240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:08.238229    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:08.238240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:08.255649    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:08.255660    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:08.273551    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:08.273565    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:08.297199    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:08.297207    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:08.308830    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:08.308840    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:08.346148    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:08.346163    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:08.350470    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:08.350477    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:08.373936    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:08.373950    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:08.392005    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:08.392020    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:08.407771    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:08.407787    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:08.444353    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:08.444367    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:08.458120    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:08.458133    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:08.468688    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:08.468700    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:08.485531    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:08.485545    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:08.496792    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:08.496804    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:11.033221    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:13.265864    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:13.265986    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:13.280765    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:13.280852    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:13.291199    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:13.291274    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:13.306105    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:13.306187    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:13.316238    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:13.316318    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:13.327003    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:13.327087    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:13.337573    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:13.337659    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:13.347522    4242 logs.go:276] 0 containers: []
	W1001 12:35:13.347539    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:13.347600    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:13.357655    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:13.357671    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:13.357676    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:13.374205    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:13.374215    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:13.394413    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:13.394426    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:13.405599    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:13.405612    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:13.422376    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:13.422392    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:13.433775    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:13.433788    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:13.438526    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:13.438536    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:13.474655    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:13.474667    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:13.486509    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:13.486521    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:13.498485    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:13.498500    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:13.522052    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:13.522061    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:13.559267    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:13.559282    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:13.573650    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:13.573665    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:16.035415    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:16.035716    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:16.060915    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:16.061043    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:16.078745    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:16.078880    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:16.091401    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:16.091479    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:16.106572    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:16.106661    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:16.116946    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:16.117036    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:16.127573    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:16.127662    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:16.138673    4721 logs.go:276] 0 containers: []
	W1001 12:35:16.138688    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:16.138757    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:16.150253    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:16.150270    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:16.150277    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:16.188705    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:16.188716    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:16.192971    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:16.192980    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:16.228080    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:16.228094    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:16.244706    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:16.244718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:16.257223    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:16.257235    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:16.271274    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:16.271288    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:16.308909    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:16.308920    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:16.325657    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:16.325668    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:16.340235    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:16.340248    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:16.352266    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:16.352277    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:16.366042    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:16.366052    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:16.380869    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:16.380880    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:16.401892    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:16.401903    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:16.413167    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:16.413179    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:16.424267    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:16.424277    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
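[Editor's note: the cycle above — and every cycle that follows — is minikube's apiserver wait loop: GET https://10.0.2.15:8443/healthz with a client-side timeout, and on failure fall back to collecting component logs. The sketch below is a minimal, hypothetical reconstruction of that polling step only; pollHealthz, the 5-second timeout, and the 2-second retry interval are illustrative assumptions, not minikube's actual api_server.go code.]

// Minimal sketch of the healthz poll recorded in the log above.
// All names and durations here are assumptions for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func pollHealthz(url string) error {
	client := &http.Client{
		// A short client timeout produces the logged
		// "Client.Timeout exceeded while awaiting headers" error.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The in-VM apiserver serves a self-signed cert, so a
			// sketch like this must skip (or pin) verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for {
		if err := pollHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			// A real runner would now enumerate containers and
			// gather their logs, as the surrounding log shows.
			fmt.Println(err)
		} else {
			fmt.Println("apiserver healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
}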
	I1001 12:35:16.090396    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:18.948665    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:21.092557    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:21.093142    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:21.132238    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:21.132394    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:21.158742    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:21.158852    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:21.172868    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:21.172962    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:21.186101    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:21.186189    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:21.197103    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:21.197178    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:21.207758    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:21.207842    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:21.218278    4242 logs.go:276] 0 containers: []
	W1001 12:35:21.218289    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:21.218361    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:21.228470    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:21.228486    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:21.228491    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:21.240414    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:21.240430    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:21.276787    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:21.276798    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:21.281308    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:21.281314    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:21.293333    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:21.293345    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:21.306993    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:21.307003    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:21.322078    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:21.322088    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:21.333995    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:21.334011    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:21.351158    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:21.351169    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:21.376343    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:21.376351    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:21.414841    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:21.414858    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:21.428929    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:21.428939    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:21.442885    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:21.442901    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:23.956546    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:23.950591    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:23.951122    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:23.993515    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:23.993680    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:24.023152    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:24.023257    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:24.035726    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:24.035810    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:24.049887    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:24.049971    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:24.060636    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:24.060747    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:24.071174    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:24.071260    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:24.081672    4721 logs.go:276] 0 containers: []
	W1001 12:35:24.081690    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:24.081763    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:24.092344    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:24.092361    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:24.092366    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:24.112213    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:24.112225    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:24.127483    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:24.127497    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:24.150418    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:24.150430    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:24.163620    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:24.163637    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:24.167872    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:24.167882    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:24.182534    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:24.182546    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:24.222453    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:24.222467    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:24.236151    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:24.236161    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:24.272083    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:24.272100    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:24.283827    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:24.283839    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:24.295704    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:24.295718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:24.307108    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:24.307121    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:24.342812    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:24.342819    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:24.356730    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:24.356742    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:24.380217    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:24.380227    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
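[Editor's note: each failed poll is followed by the same enumeration step (logs.go:276): one `docker ps -a` per component, filtered by the k8s_ name prefix and formatted down to bare container IDs; zero matches yields the "No container was found matching" warning seen for kindnet. A hedged sketch of that step, run locally via os/exec rather than over minikube's ssh_runner; listContainers is a hypothetical name, while the docker flags are verbatim from the log.]

// Illustrative reconstruction of the container-enumeration step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or not)
// whose name matches the k8s_<component> prefix convention.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// An empty result here corresponds to the logged warning:
		// No container was found matching "kindnet"
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}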
	I1001 12:35:26.894855    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:28.959336    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:28.959861    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:28.999202    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:28.999378    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:29.020723    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:29.020842    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:29.036696    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:29.036798    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:29.049570    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:29.049665    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:29.060591    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:29.060678    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:29.071524    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:29.071610    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:29.081752    4242 logs.go:276] 0 containers: []
	W1001 12:35:29.081767    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:29.081839    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:29.092108    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:29.092124    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:29.092130    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:29.096607    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:29.096614    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:29.110806    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:29.110816    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:29.122958    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:29.122968    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:29.138939    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:29.138952    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:29.151841    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:29.151854    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:29.178088    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:29.178101    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:29.213952    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:29.213961    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:29.252318    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:29.252333    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:29.266816    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:29.266833    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:29.278925    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:29.278937    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:29.291144    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:29.291155    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:29.309390    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:29.309401    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:31.897269    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:31.897651    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:31.929185    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:31.929348    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:31.948202    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:31.948318    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:31.962926    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:31.963024    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:31.975511    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:31.975595    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:31.986105    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:31.986193    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:31.996975    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:31.997057    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:32.014713    4721 logs.go:276] 0 containers: []
	W1001 12:35:32.014725    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:32.014803    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:32.026881    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:32.026900    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:32.026906    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:32.038622    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:32.038637    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:32.052363    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:32.052375    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:32.065302    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:32.065314    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:32.110744    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:32.110757    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:32.124791    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:32.124805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:32.139196    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:32.139212    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:32.154410    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:32.154422    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:32.172014    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:32.172024    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:32.211613    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:32.211624    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:31.822815    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:32.236201    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:32.236211    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:32.270943    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:32.270955    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:32.284957    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:32.284972    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:32.297617    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:32.297631    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:32.309549    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:32.309562    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:32.321455    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:32.321470    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:34.826636    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:36.825111    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:36.825344    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:36.843600    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:36.843713    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:36.857892    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:36.857982    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:36.869513    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:36.869591    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:36.880173    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:36.880260    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:36.890460    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:36.890540    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:36.900950    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:36.901035    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:36.911888    4242 logs.go:276] 0 containers: []
	W1001 12:35:36.911900    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:36.911977    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:36.921903    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:36.921919    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:36.921925    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:36.935861    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:36.935872    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:36.949848    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:36.949859    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:36.961439    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:36.961449    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:36.976581    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:36.976598    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:36.993081    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:36.993093    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:37.004812    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:37.004822    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:37.022023    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:37.022034    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:37.059086    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:37.059097    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:37.070643    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:37.070656    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:37.094481    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:37.094489    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:37.130619    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:37.130635    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:37.144974    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:37.144989    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:39.651695    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:39.829166    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:39.829331    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:39.842208    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:39.842299    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:39.853734    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:39.853821    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:39.867595    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:39.867684    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:39.877663    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:39.877746    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:39.892947    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:39.893035    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:39.903517    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:39.903595    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:39.913636    4721 logs.go:276] 0 containers: []
	W1001 12:35:39.913649    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:39.913727    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:39.924021    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:39.924041    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:39.924046    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:39.939071    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:39.939087    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:39.959182    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:39.959193    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:39.976076    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:39.976091    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:39.980534    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:39.980540    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:39.991657    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:39.991668    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:40.003113    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:40.003127    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:40.026964    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:40.026973    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:40.061999    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:40.062013    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:40.076816    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:40.076829    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:40.092032    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:40.092048    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:40.103752    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:40.103762    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:40.115956    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:40.115967    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:40.134745    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:40.134761    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:40.173793    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:40.173813    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:40.217059    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:40.217075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
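[Editor's note: the gathering step itself (logs.go:123) tails the last 400 lines of each enumerated container, then collects overall container status with the shell fallback visible in the log: prefer crictl, else `docker ps -a`. A minimal sketch under those assumptions; gatherLogs and containerStatus are hypothetical names, and the container ID used in main is one taken from the log above.]

// Hedged sketch of the log-gathering and status-fallback steps.
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs returns the last 400 lines of one container's logs,
// matching the `docker logs --tail 400 <id>` commands in the log.
func gatherLogs(id string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("docker logs --tail 400 %s", id)).CombinedOutput()
	return string(out), err
}

// containerStatus mirrors the logged fallback one-liner exactly:
// use crictl if present, otherwise fall back to docker.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	if logs, err := gatherLogs("5cc1ba08286c"); err == nil {
		fmt.Println(logs)
	}
	if status, err := containerStatus(); err == nil {
		fmt.Println(status)
	}
}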
	I1001 12:35:44.654279    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:44.654758    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:44.687058    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:44.687240    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:44.706645    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:44.706761    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:42.729941    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:44.726125    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:44.726226    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:44.745151    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:44.745238    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:44.755848    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:44.755936    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:44.766519    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:44.766598    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:44.776493    4242 logs.go:276] 0 containers: []
	W1001 12:35:44.776509    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:44.776586    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:44.787179    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:44.787194    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:44.787199    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:44.799282    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:44.799294    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:44.810739    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:44.810756    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:44.845172    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:44.845180    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:44.850574    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:44.850584    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:44.886074    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:44.886087    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:44.899734    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:44.899749    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:44.911284    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:44.911294    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:44.928535    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:44.928548    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:44.943675    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:44.943691    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:44.957961    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:44.957974    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:44.973907    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:44.973921    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:44.985205    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:44.985218    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:47.510689    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:47.732080    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:47.732304    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:47.752330    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:47.752450    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:47.767310    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:47.767409    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:47.778799    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:47.778882    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:47.789049    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:47.789124    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:47.802149    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:47.802233    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:47.819798    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:47.819875    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:47.834491    4721 logs.go:276] 0 containers: []
	W1001 12:35:47.834501    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:47.834566    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:47.844701    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:47.844719    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:47.844725    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:47.856064    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:47.856075    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:47.879037    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:47.879044    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:47.893982    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:47.893993    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:47.909296    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:47.909306    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:47.921099    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:47.921110    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:47.933373    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:47.933386    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:47.944750    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:47.944762    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:47.981264    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:47.981272    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:48.018666    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:48.018676    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:48.037797    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:48.037809    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:48.049506    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:48.049518    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:48.053597    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:48.053604    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:48.092364    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:48.092382    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:48.114216    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:48.114229    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:48.130170    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:48.130182    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:50.645264    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:52.513074    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:52.513671    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:52.552216    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:35:52.552402    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:52.574009    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:35:52.574152    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:52.588936    4242 logs.go:276] 2 containers: [5e5e58a930ac c3764113e7e4]
	I1001 12:35:52.589038    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:52.601610    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:35:52.601702    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:52.612809    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:35:52.612890    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:52.623406    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:35:52.623482    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:52.634141    4242 logs.go:276] 0 containers: []
	W1001 12:35:52.634160    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:52.634232    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:52.647188    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:35:52.647202    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:35:52.647208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:35:52.664231    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:35:52.664242    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:35:52.679948    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:35:52.679960    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:35:52.696374    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:35:52.696385    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:35:52.715484    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:52.715495    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:52.751797    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:52.751805    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:52.756217    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:52.756225    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:52.794127    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:35:52.794142    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:35:52.806884    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:52.806896    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:52.830232    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:35:52.830241    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:52.841194    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:35:52.841207    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:35:52.859483    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:35:52.859494    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:35:52.873111    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:35:52.873121    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:35:55.647073    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:55.647348    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:55.668443    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:55.668572    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:55.683883    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:55.683987    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:55.696513    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:55.696598    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:55.709055    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:55.709147    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:55.719618    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:55.719710    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:55.730533    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:55.730618    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:55.740518    4721 logs.go:276] 0 containers: []
	W1001 12:35:55.740533    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:55.740610    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:55.751330    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:55.751347    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:55.751353    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:55.762121    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:55.762133    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:55.773431    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:55.773444    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:55.788290    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:55.788300    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:55.830340    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:55.830356    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:55.842931    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:55.842941    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:55.854331    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:55.854340    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:55.870936    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:55.870948    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:55.908945    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:55.908953    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:55.912905    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:55.912915    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:55.948080    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:55.948093    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:55.963396    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:55.963409    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:55.978616    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:55.978630    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:55.994273    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:55.994285    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:56.007834    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:56.007851    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:56.025664    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:56.025676    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:55.386841    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:58.551537    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:00.388199    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:00.388397    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:00.401553    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:00.401647    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:00.412254    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:00.412343    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:00.424979    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:00.425054    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:00.435456    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:00.435539    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:00.446504    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:00.446592    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:00.457245    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:00.457322    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:00.467591    4242 logs.go:276] 0 containers: []
	W1001 12:36:00.467605    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:00.467676    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:00.478298    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:00.478315    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:00.478320    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:00.489851    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:00.489862    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:00.526237    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:00.526248    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:00.538048    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:00.538060    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:00.556941    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:00.556952    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:00.571755    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:00.571766    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:00.586154    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:00.586165    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:00.598498    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:00.598511    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:00.613442    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:00.613453    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:00.628584    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:00.628594    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:00.639953    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:00.639962    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:00.657693    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:00.657704    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:00.682313    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:00.682323    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:00.686802    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:00.686815    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:00.721970    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:00.721986    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:03.235101    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:03.553964    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:03.554217    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:03.573170    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:03.573286    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:03.586920    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:03.587002    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:03.599156    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:03.599251    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:03.609482    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:03.609562    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:03.623377    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:03.623470    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:03.634371    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:03.634453    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:03.644847    4721 logs.go:276] 0 containers: []
	W1001 12:36:03.644865    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:03.644930    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:03.655383    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:03.655401    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:03.655407    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:03.659800    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:03.659807    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:03.699046    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:03.699058    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:03.711174    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:03.711184    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:03.724445    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:03.724458    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:03.764048    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:03.764056    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:03.777842    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:03.777855    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:03.804461    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:03.804477    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:03.816737    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:03.816749    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:03.834305    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:03.834320    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:03.845085    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:03.845097    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:03.867749    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:03.867757    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:03.901505    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:03.901516    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:03.915917    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:03.915927    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:03.930643    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:03.930656    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:03.942579    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:03.942588    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:06.456007    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:08.237209    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
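The "stopped:" entries here are client-side timeouts, not HTTP errors: each probe logged at api_server.go:253 gives up after roughly five seconds without response headers ("Client.Timeout exceeded while awaiting headers") and is recorded at api_server.go:269, after which the runner falls back to collecting diagnostics. Below is a minimal Go sketch of that probe pattern; the endpoint and the ~5s budget are read off the log, while the function name, the plain http.Client, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz is a hypothetical stand-in for the check at api_server.go:253:
    // one GET with a short client-side timeout against the apiserver's /healthz.
    func probeHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s probe-to-stopped gap in the log
            Transport: &http.Transport{
                // the apiserver inside the VM serves a cluster-signed cert; a real
                // client would trust the cluster CA, but skipping verification
                // keeps this sketch self-contained
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            // an unresponsive apiserver surfaces here as
            // "Client.Timeout exceeded while awaiting headers"
            return err
        }
        defer resp.Body.Close()
        fmt.Println("healthz:", resp.Status)
        return nil
    }

    func main() {
        if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }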
	I1001 12:36:08.237400    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:08.249595    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:08.249689    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:08.260471    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:08.260563    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:08.271588    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:08.271677    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:08.286969    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:08.287058    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:08.298373    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:08.298460    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:08.309118    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:08.309205    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:08.320346    4242 logs.go:276] 0 containers: []
	W1001 12:36:08.320357    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:08.320425    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:08.331540    4242 logs.go:276] 1 containers: [97631f54aa43]
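Each failed probe triggers the same two-step collection seen above and in the lines that follow: one `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` per control-plane component to enumerate containers (logs.go:276), then a `docker logs --tail 400 <id>` for every ID found (logs.go:123). The sketch below mirrors that pattern in Go; the component list and the k8s_ name prefix come from the log, but running docker locally via os/exec, rather than over SSH inside the VM as ssh_runner does, is a simplification for illustration.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // components mirrors the filters issued in the log; "kindnet" is also
    // queried there and matches nothing on this cluster ("0 containers").
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    }

    func main() {
        for _, c := range components {
            // step 1: enumerate container IDs for the component (logs.go:276)
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "enumeration failed:", err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
            // step 2: dump the tail of each container's log (logs.go:123)
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }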
	I1001 12:36:08.331558    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:08.331564    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:08.366350    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:08.366362    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:08.381441    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:08.381455    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:08.405687    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:08.405696    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:08.416582    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:08.416593    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:08.421438    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:08.421446    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:08.433681    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:08.433694    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:08.449933    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:08.449945    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:08.465497    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:08.465507    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:08.503900    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:08.503919    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:08.520157    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:08.520171    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:08.532224    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:08.532241    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:08.543954    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:08.543965    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:08.557698    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:08.557714    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:08.568807    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:08.568818    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:11.458166    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:11.458304    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:11.476852    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:11.476945    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:11.488913    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:11.489004    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:11.499115    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:11.499189    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:11.509887    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:11.509964    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:11.520664    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:11.520754    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:11.531821    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:11.531910    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:11.542380    4721 logs.go:276] 0 containers: []
	W1001 12:36:11.542394    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:11.542461    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:11.552961    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:11.552980    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:11.552985    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:11.568432    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:11.568446    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:11.581602    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:11.581616    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:11.603853    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:11.603864    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:11.640296    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:11.640305    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:11.675092    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:11.675107    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:11.713299    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:11.713311    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:11.727588    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:11.727603    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:11.742651    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:11.742662    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:11.760419    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:11.760434    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:11.764601    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:11.764612    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:11.776252    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:11.776268    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:11.787431    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:11.787442    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:11.806922    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:11.806937    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:11.822664    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:11.822677    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:11.834754    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:11.834766    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:11.088829    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:14.348819    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:16.091651    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:16.092022    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:16.119341    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:16.119493    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:16.138691    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:16.138807    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:16.153004    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:16.153099    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:16.169724    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:16.169811    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:16.180866    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:16.180950    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:16.194652    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:16.194740    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:16.204716    4242 logs.go:276] 0 containers: []
	W1001 12:36:16.204730    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:16.204803    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:16.215463    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:16.215484    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:16.215490    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:16.226834    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:16.226846    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:16.239772    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:16.239786    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:16.251222    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:16.251237    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:16.285810    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:16.285818    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:16.297431    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:16.297443    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:16.309597    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:16.309609    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:16.314301    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:16.314307    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:16.352957    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:16.352968    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:16.370066    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:16.370078    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:16.391676    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:16.391687    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:16.403310    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:16.403324    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:16.418577    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:16.418590    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:16.436311    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:16.436322    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:16.447843    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:16.447856    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:18.974808    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:19.351078    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:19.351276    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:19.367744    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:19.367843    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:19.378845    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:19.378939    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:19.389868    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:19.389957    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:19.401387    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:19.401466    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:19.411760    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:19.411832    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:19.422542    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:19.422627    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:19.432929    4721 logs.go:276] 0 containers: []
	W1001 12:36:19.432942    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:19.433018    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:19.443657    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:19.443675    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:19.443680    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:19.456076    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:19.456087    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:19.468194    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:19.468209    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:19.479745    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:19.479757    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:19.491254    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:19.491268    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:19.495883    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:19.495890    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:19.509653    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:19.509666    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:19.523871    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:19.523888    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:19.544391    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:19.544410    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:19.556391    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:19.556402    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:19.580177    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:19.580184    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:19.617932    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:19.617939    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:19.661432    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:19.661449    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:19.675480    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:19.675496    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:19.712829    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:19.712840    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:19.727796    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:19.727810    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:23.977040    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:23.977290    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:24.001191    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:24.001310    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:24.015197    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:24.015301    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:24.027004    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:24.027086    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:24.038541    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:24.038627    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:24.048940    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:24.049023    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:24.059777    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:24.059868    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:24.070728    4242 logs.go:276] 0 containers: []
	W1001 12:36:24.070740    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:24.070812    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:24.081440    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:24.081457    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:24.081463    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:24.094175    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:24.094186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:24.113026    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:24.113039    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:24.124976    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:24.124988    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:24.162356    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:24.162368    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:24.174242    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:24.174253    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:24.188450    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:24.188461    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:24.204969    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:24.204982    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:24.218951    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:24.218963    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:24.238699    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:24.238712    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:24.254819    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:24.254831    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:24.259909    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:24.259915    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:24.301633    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:24.301647    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:24.317853    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:24.317864    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:24.330466    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:24.330480    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:22.241575    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:26.856362    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:27.243844    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:27.244201    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:27.271551    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:27.271748    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:27.289581    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:27.289706    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:27.306381    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:27.306475    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:27.321324    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:27.321407    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:27.331771    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:27.331852    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:27.342668    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:27.342744    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:27.352886    4721 logs.go:276] 0 containers: []
	W1001 12:36:27.352900    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:27.352976    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:27.364942    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:27.364963    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:27.364969    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:27.376112    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:27.376127    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:27.387655    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:27.387666    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:27.404866    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:27.404878    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:27.417352    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:27.417365    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:27.429060    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:27.429075    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:27.463063    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:27.463075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:27.479112    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:27.479123    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:27.493321    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:27.493333    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:27.514961    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:27.514969    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:27.519443    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:27.519450    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:27.557860    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:27.557872    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:27.569938    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:27.569950    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:27.585115    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:27.585130    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:27.621527    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:27.621538    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:27.635290    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:27.635300    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:30.154702    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:31.858645    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:31.858841    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:31.873130    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:31.873227    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:31.896927    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:31.897017    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:31.908224    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:31.908321    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:31.918578    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:31.918664    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:31.928959    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:31.929039    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:31.939159    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:31.939243    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:31.949259    4242 logs.go:276] 0 containers: []
	W1001 12:36:31.949270    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:31.949342    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:31.959597    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:31.959617    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:31.959623    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:31.970983    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:31.970998    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:31.975711    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:31.975719    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:32.011770    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:32.011784    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:32.026365    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:32.026377    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:32.038002    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:32.038016    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:32.055685    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:32.055699    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:32.071871    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:32.071887    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:32.094891    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:32.094902    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:32.106708    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:32.106724    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:32.120912    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:32.120922    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:32.132833    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:32.132849    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:32.144780    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:32.144796    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:32.156975    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:32.156990    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:32.182243    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:32.182252    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:35.156953    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:35.157318    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:35.190759    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:35.190915    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:35.210160    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:35.210275    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:35.225427    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:35.225522    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:35.237543    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:35.237637    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:35.249206    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:35.249289    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:35.259860    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:35.259944    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:35.270826    4721 logs.go:276] 0 containers: []
	W1001 12:36:35.270838    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:35.270908    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:35.281464    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:35.281482    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:35.281487    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:35.317935    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:35.317944    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:35.331606    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:35.331615    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:35.343187    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:35.343201    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:35.355568    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:35.355580    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:35.359774    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:35.359781    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:35.378017    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:35.378028    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:35.401407    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:35.401421    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:35.444588    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:35.444605    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:35.477839    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:35.477856    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:35.492389    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:35.492401    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:35.504060    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:35.504071    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:35.517617    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:35.517630    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:35.554629    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:35.554640    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:35.570068    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:35.570080    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:35.581721    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:35.581732    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:34.721093    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:38.096722    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:39.723372    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:39.723612    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:39.742555    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:39.742652    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:39.758067    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:39.758143    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:39.773992    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:39.774087    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:39.784355    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:39.784441    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:39.795752    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:39.795829    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:39.806768    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:39.806854    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:39.817440    4242 logs.go:276] 0 containers: []
	W1001 12:36:39.817450    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:39.817516    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:39.827817    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:39.827836    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:39.827841    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:39.840013    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:39.840023    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:39.863982    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:39.863991    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:39.875692    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:39.875705    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:39.887485    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:39.887495    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:39.898950    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:39.898964    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:39.910179    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:39.910191    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:39.921669    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:39.921684    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:39.946834    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:39.946846    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:39.971663    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:39.971674    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:40.008650    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:40.008659    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:40.013093    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:40.013100    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:40.047130    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:40.047144    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:40.061586    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:40.061603    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:40.077409    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:40.077420    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:42.598164    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:43.099391    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:43.099948    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:43.143598    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:43.143778    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:43.163798    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:43.163917    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:43.180662    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:43.180757    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:43.193202    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:43.193301    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:43.208166    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:43.208249    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:43.220060    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:43.220147    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:43.232124    4721 logs.go:276] 0 containers: []
	W1001 12:36:43.232136    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:43.232215    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:43.242734    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:43.242751    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:43.242758    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:43.279822    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:43.279835    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:43.292094    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:43.292108    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:43.309792    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:43.309805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:43.321350    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:43.321362    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:43.346482    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:43.346492    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:43.350654    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:43.350662    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:43.364816    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:43.364830    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:43.397706    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:43.397718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:43.413523    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:43.413537    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:43.452120    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:43.452130    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:43.464602    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:43.464615    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:43.478645    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:43.478661    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:43.492847    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:43.492856    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:43.530449    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:43.530471    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:43.549104    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:43.549114    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:46.062490    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:47.600529    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:47.600712    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:47.614462    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:47.614560    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:47.625766    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:47.625853    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:47.636611    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:47.636689    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:47.647402    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:47.647485    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:47.658363    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:47.658451    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:47.672905    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:47.672981    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:47.682736    4242 logs.go:276] 0 containers: []
	W1001 12:36:47.682747    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:47.682822    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:47.696908    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:47.696924    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:47.696929    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:47.711019    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:47.711034    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:47.722818    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:47.722835    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:47.758831    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:47.758841    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:47.781823    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:47.781831    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:47.799347    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:47.799359    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:47.811678    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:47.811692    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:47.816692    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:47.816701    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:47.852490    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:47.852501    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:47.864742    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:47.864760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:47.876205    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:47.876217    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:47.887868    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:47.887880    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:47.903402    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:47.903418    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:47.917854    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:47.917864    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:47.929836    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:47.929847    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
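
The repeated pairs above — enumerate container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail each match with `docker logs --tail 400 <id>` — all follow one loop. A minimal Go sketch of that pattern (not minikube's actual source; it assumes local `docker` access rather than minikube's SSH runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```
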
	I1001 12:36:51.064828    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:51.065001    4721 kubeadm.go:597] duration metric: took 4m3.886574291s to restartPrimaryControlPlane
	W1001 12:36:51.065114    4721 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 12:36:51.065169    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 12:36:52.117877    4721 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.052720833s)
	I1001 12:36:52.117949    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 12:36:52.123053    4721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:36:52.126155    4721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:36:52.129592    4721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:36:52.129601    4721 kubeadm.go:157] found existing configuration files:
	
	I1001 12:36:52.129627    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I1001 12:36:52.132923    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:36:52.132951    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:36:52.135873    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I1001 12:36:52.138524    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:36:52.138552    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:36:52.141903    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I1001 12:36:52.145117    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:36:52.145149    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:36:52.148088    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I1001 12:36:52.150655    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:36:52.150679    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
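
The four grep/rm pairs above implement one check: a kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint; otherwise it is removed so that `kubeadm init` starts clean. A rough Go equivalent, with the endpoint and paths taken from the log (the error handling is illustrative, not minikube's):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50511"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors: sudo rm -f <conf> after the grep exits non-zero.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			os.Remove(f)
		}
	}
}
```
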
	I1001 12:36:52.153979    4721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 12:36:52.170950    4721 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 12:36:52.170991    4721 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 12:36:52.221830    4721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 12:36:52.221887    4721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 12:36:52.222015    4721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 12:36:52.273385    4721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 12:36:52.277676    4721 out.go:235]   - Generating certificates and keys ...
	I1001 12:36:52.277712    4721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 12:36:52.277751    4721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 12:36:52.277797    4721 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 12:36:52.277830    4721 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 12:36:52.277875    4721 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 12:36:52.277903    4721 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 12:36:52.277944    4721 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 12:36:52.277979    4721 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 12:36:52.278021    4721 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 12:36:52.278072    4721 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 12:36:52.278091    4721 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 12:36:52.278123    4721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 12:36:52.446212    4721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 12:36:52.505021    4721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 12:36:52.636464    4721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 12:36:52.683470    4721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 12:36:52.713766    4721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 12:36:52.714201    4721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 12:36:52.714308    4721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 12:36:52.810731    4721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 12:36:50.444847    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:52.817872    4721 out.go:235]   - Booting up control plane ...
	I1001 12:36:52.817924    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 12:36:52.817963    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 12:36:52.818024    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 12:36:52.818061    4721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 12:36:52.818153    4721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 12:36:57.316134    4721 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501314 seconds
	I1001 12:36:57.316201    4721 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 12:36:57.320322    4721 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 12:36:57.841517    4721 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 12:36:57.841762    4721 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-340000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 12:36:58.346273    4721 kubeadm.go:310] [bootstrap-token] Using token: 55wevq.3qkjkejxbsnf8vog
	I1001 12:36:58.348793    4721 out.go:235]   - Configuring RBAC rules ...
	I1001 12:36:58.348849    4721 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 12:36:58.348899    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 12:36:58.355312    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 12:36:58.356199    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 12:36:58.357114    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 12:36:58.358031    4721 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 12:36:58.362339    4721 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 12:36:58.544001    4721 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 12:36:58.751979    4721 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 12:36:58.752457    4721 kubeadm.go:310] 
	I1001 12:36:58.752498    4721 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 12:36:58.752509    4721 kubeadm.go:310] 
	I1001 12:36:58.752555    4721 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 12:36:58.752563    4721 kubeadm.go:310] 
	I1001 12:36:58.752583    4721 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 12:36:58.752618    4721 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 12:36:58.752648    4721 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 12:36:58.752652    4721 kubeadm.go:310] 
	I1001 12:36:58.752693    4721 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 12:36:58.752699    4721 kubeadm.go:310] 
	I1001 12:36:58.752729    4721 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 12:36:58.752733    4721 kubeadm.go:310] 
	I1001 12:36:58.752770    4721 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 12:36:58.752810    4721 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 12:36:58.752854    4721 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 12:36:58.752858    4721 kubeadm.go:310] 
	I1001 12:36:58.752909    4721 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 12:36:58.752961    4721 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 12:36:58.752966    4721 kubeadm.go:310] 
	I1001 12:36:58.753010    4721 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 55wevq.3qkjkejxbsnf8vog \
	I1001 12:36:58.753075    4721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e \
	I1001 12:36:58.753087    4721 kubeadm.go:310] 	--control-plane 
	I1001 12:36:58.753092    4721 kubeadm.go:310] 
	I1001 12:36:58.753137    4721 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 12:36:58.753140    4721 kubeadm.go:310] 
	I1001 12:36:58.753201    4721 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 55wevq.3qkjkejxbsnf8vog \
	I1001 12:36:58.753250    4721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e 
	I1001 12:36:58.753385    4721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 12:36:58.753394    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:36:58.753401    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:36:58.757097    4721 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 12:36:58.764035    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 12:36:58.767083    4721 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
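
The two commands above create /etc/cni/net.d and write a generated bridge conflist into it ("scp memory" copies an in-memory asset rather than a file). A sketch of what that step amounts to; the JSON body is an illustrative bridge/portmap chain with a commonly used pod subnet, not the exact 496-byte payload minikube generated here:

```go
package main

import "os"

// Illustrative conflist only: plugin fields and the 10.244.0.0/16 subnet
// are assumptions, not recovered from this log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Equivalent of: sudo mkdir -p /etc/cni/net.d, then write the asset.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```
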
	I1001 12:36:58.774447    4721 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 12:36:58.774519    4721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-340000 minikube.k8s.io/updated_at=2024_10_01T12_36_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=stopped-upgrade-340000 minikube.k8s.io/primary=true
	I1001 12:36:58.774520    4721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 12:36:58.809772    4721 kubeadm.go:1113] duration metric: took 35.309458ms to wait for elevateKubeSystemPrivileges
	I1001 12:36:58.815288    4721 ops.go:34] apiserver oom_adj: -16
	I1001 12:36:58.815297    4721 kubeadm.go:394] duration metric: took 4m11.653538458s to StartCluster
	I1001 12:36:58.815307    4721 settings.go:142] acquiring lock: {Name:mk456a8b96b1746a679d3a85129b9d4d9b38bdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:36:58.815398    4721 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:36:58.815806    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:36:58.816036    4721 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:36:58.816077    4721 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 12:36:58.816113    4721 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-340000"
	I1001 12:36:58.816121    4721 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-340000"
	I1001 12:36:58.816121    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W1001 12:36:58.816124    4721 addons.go:243] addon storage-provisioner should already be in state true
	I1001 12:36:58.816144    4721 host.go:66] Checking if "stopped-upgrade-340000" exists ...
	I1001 12:36:58.816179    4721 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-340000"
	I1001 12:36:58.816190    4721 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-340000"
	I1001 12:36:58.817070    4721 kapi.go:59] client config for stopped-upgrade-340000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:36:58.817189    4721 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-340000"
	W1001 12:36:58.817193    4721 addons.go:243] addon default-storageclass should already be in state true
	I1001 12:36:58.817201    4721 host.go:66] Checking if "stopped-upgrade-340000" exists ...
	I1001 12:36:58.820016    4721 out.go:177] * Verifying Kubernetes components...
	I1001 12:36:58.820321    4721 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 12:36:58.824189    4721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 12:36:58.824196    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:36:58.827954    4721 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:36:55.446945    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:55.447086    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:55.458994    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:36:55.459079    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:55.469900    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:36:55.469987    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:55.485161    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:36:55.485251    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:55.495965    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:36:55.496047    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:55.506809    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:36:55.506899    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:55.517484    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:36:55.517572    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:55.530595    4242 logs.go:276] 0 containers: []
	W1001 12:36:55.530607    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:55.530679    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:55.541709    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:36:55.541727    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:36:55.541733    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:36:55.558558    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:36:55.558573    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:36:55.576040    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:55.576053    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:55.602451    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:55.602462    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:55.607671    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:36:55.607678    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:36:55.622429    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:36:55.622442    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:36:55.636942    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:36:55.636955    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:36:55.649279    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:55.649290    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:55.685939    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:36:55.685952    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:36:55.698060    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:36:55.698073    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:55.711071    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:36:55.711083    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:36:55.723511    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:55.723523    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:55.759511    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:36:55.759523    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:36:55.777338    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:36:55.777353    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:36:55.789349    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:36:55.789360    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:36:58.309556    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:58.832050    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:36:58.836096    4721 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:36:58.836104    4721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 12:36:58.836110    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:36:58.908983    4721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:36:58.914444    4721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 12:36:58.917780    4721 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:36:58.917830    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:36:58.947920    4721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
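
Enabling an addon, as the lines above show, boils down to copying the manifest into /etc/kubernetes/addons/ on the guest and applying it with the in-VM kubectl binary and kubeconfig. A hedged sketch of the apply half, with paths copied from the log (error handling is illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig
//   /var/lib/minikube/binaries/v1.24.1/kubectl apply -f <manifest>
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v: %s", manifest, err, out)
	}
	return nil
}

func main() {
	for _, m := range []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	} {
		if err := applyAddon(m); err != nil {
			fmt.Println("!", err)
		}
	}
}
```
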
	I1001 12:36:59.266026    4721 api_server.go:72] duration metric: took 449.98825ms to wait for apiserver process to appear ...
	I1001 12:36:59.266040    4721 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:36:59.266052    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:59.266469    4721 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 12:36:59.266478    4721 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
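
Most of the remaining log is the healthz poll from api_server.go: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retrying until an overall deadline, which here never succeeds. A self-contained sketch of that loop, assuming the retry cadence from the ~5s gaps between checks and skipping TLS verification for brevity (the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthy polls url until it returns 200 OK or deadline elapses.
func waitForHealthy(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request timeout, assumed from the log's spacing
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil // apiserver reported healthy
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	if err := waitForHealthy("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}
```
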
	I1001 12:37:03.311670    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:03.311802    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:03.324548    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:03.324630    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:03.335888    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:03.335963    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:03.346579    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:03.346667    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:03.357413    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:03.357497    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:03.368073    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:03.368153    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:03.381143    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:03.381233    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:03.391302    4242 logs.go:276] 0 containers: []
	W1001 12:37:03.391317    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:03.391393    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:03.401604    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:03.401620    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:03.401626    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:03.436757    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:03.436770    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:03.448171    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:03.448186    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:03.460076    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:03.460087    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:03.471932    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:03.471945    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:03.496957    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:03.496966    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:03.512509    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:03.512519    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:03.548732    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:03.548743    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:03.553164    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:03.553170    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:03.566921    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:03.566932    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:03.579139    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:03.579150    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:03.594417    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:03.594428    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:03.611609    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:03.611620    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:03.623851    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:03.623862    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:03.638444    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:03.638460    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:04.268027    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:04.268084    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:06.151842    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:09.268308    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:09.268341    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:11.154058    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:11.154241    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:11.166857    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:11.166946    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:11.178222    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:11.178306    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:11.189208    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:11.189294    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:11.199899    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:11.199984    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:11.210541    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:11.210620    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:11.221957    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:11.222041    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:11.231928    4242 logs.go:276] 0 containers: []
	W1001 12:37:11.231939    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:11.232017    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:11.242491    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:11.242510    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:11.242515    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:11.247458    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:11.247466    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:11.263260    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:11.263278    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:11.281690    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:11.281702    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:11.308657    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:11.308672    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:11.346313    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:11.346326    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:11.358079    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:11.358096    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:11.370965    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:11.370978    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:11.386227    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:11.386245    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:11.397879    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:11.397890    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:11.409191    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:11.409205    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:11.421302    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:11.421313    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:11.456111    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:11.456119    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:11.470741    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:11.470753    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:11.486007    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:11.486019    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:13.999777    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:14.268527    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:14.268576    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:19.002048    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:19.002191    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:19.014063    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:19.014140    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:19.024390    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:19.024475    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:19.034655    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:19.034746    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:19.045243    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:19.045327    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:19.055862    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:19.055941    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:19.075511    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:19.075586    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:19.086031    4242 logs.go:276] 0 containers: []
	W1001 12:37:19.086043    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:19.086114    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:19.096551    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:19.096568    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:19.096574    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:19.101143    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:19.101150    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:19.112170    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:19.112185    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:19.124019    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:19.124028    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:19.141256    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:19.141267    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:19.152630    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:19.152643    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:19.176159    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:19.176169    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:19.211416    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:19.211425    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:19.247303    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:19.247316    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:19.261487    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:19.261500    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:19.274164    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:19.274172    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:19.288330    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:19.288347    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:19.299685    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:19.299696    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:19.311261    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:19.311270    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:19.322945    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:19.322957    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:19.268919    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:19.268939    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:21.842433    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:24.269385    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:24.269438    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 12:37:29.267470    4721 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 12:37:29.270037    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:29.270056    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:29.270729    4721 out.go:177] * Enabled addons: storage-provisioner
	I1001 12:37:26.844601    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:26.844825    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:26.860678    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:26.860781    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:26.872320    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:26.872410    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:26.883239    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:26.883331    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:26.893934    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:26.894020    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:26.904507    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:26.904587    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:26.915524    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:26.915610    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:26.927314    4242 logs.go:276] 0 containers: []
	W1001 12:37:26.927329    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:26.927404    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:26.938147    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:26.938171    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:26.938177    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:26.955573    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:26.955584    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:26.968909    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:26.968921    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:27.006960    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:27.006975    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:27.045407    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:27.045418    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:27.059784    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:27.059796    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:27.071458    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:27.071470    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:27.076014    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:27.076020    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:27.087871    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:27.087887    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:27.103340    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:27.103351    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:27.125752    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:27.125760    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:27.140152    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:27.140168    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:27.151935    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:27.151947    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:27.164063    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:27.164080    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:27.177568    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:27.177580    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:29.692069    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:29.276589    4721 addons.go:510] duration metric: took 30.461293709s for enable addons: enabled=[storage-provisioner]
	I1001 12:37:34.694223    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:34.694452    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:34.271039    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:34.271114    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:34.712199    4242 logs.go:276] 1 containers: [b4b0ba48f60b]
	I1001 12:37:34.712301    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:34.726196    4242 logs.go:276] 1 containers: [4fffcaa9e400]
	I1001 12:37:34.726289    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:34.737910    4242 logs.go:276] 4 containers: [f312b9c9ac08 1242378878f5 5e5e58a930ac c3764113e7e4]
	I1001 12:37:34.737988    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:34.749715    4242 logs.go:276] 1 containers: [3430a5479e9c]
	I1001 12:37:34.749792    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:34.760398    4242 logs.go:276] 1 containers: [ae0380eb6ceb]
	I1001 12:37:34.760481    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:34.771557    4242 logs.go:276] 1 containers: [38b93891ecd6]
	I1001 12:37:34.771647    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:34.784813    4242 logs.go:276] 0 containers: []
	W1001 12:37:34.784828    4242 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:34.784910    4242 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:34.800757    4242 logs.go:276] 1 containers: [97631f54aa43]
	I1001 12:37:34.800775    4242 logs.go:123] Gathering logs for coredns [f312b9c9ac08] ...
	I1001 12:37:34.800782    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f312b9c9ac08"
	I1001 12:37:34.812456    4242 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:34.812471    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:34.846874    4242 logs.go:123] Gathering logs for kube-apiserver [b4b0ba48f60b] ...
	I1001 12:37:34.846890    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4b0ba48f60b"
	I1001 12:37:34.862592    4242 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:34.862602    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:34.898774    4242 logs.go:123] Gathering logs for coredns [1242378878f5] ...
	I1001 12:37:34.898782    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1242378878f5"
	I1001 12:37:34.910766    4242 logs.go:123] Gathering logs for coredns [5e5e58a930ac] ...
	I1001 12:37:34.910777    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e5e58a930ac"
	I1001 12:37:34.925480    4242 logs.go:123] Gathering logs for coredns [c3764113e7e4] ...
	I1001 12:37:34.925495    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c3764113e7e4"
	I1001 12:37:34.937197    4242 logs.go:123] Gathering logs for kube-controller-manager [38b93891ecd6] ...
	I1001 12:37:34.937208    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38b93891ecd6"
	I1001 12:37:34.954424    4242 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:34.954437    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:37:34.977706    4242 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:34.977714    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:34.981710    4242 logs.go:123] Gathering logs for kube-scheduler [3430a5479e9c] ...
	I1001 12:37:34.981719    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3430a5479e9c"
	I1001 12:37:34.996882    4242 logs.go:123] Gathering logs for kube-proxy [ae0380eb6ceb] ...
	I1001 12:37:34.996894    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae0380eb6ceb"
	I1001 12:37:35.010211    4242 logs.go:123] Gathering logs for storage-provisioner [97631f54aa43] ...
	I1001 12:37:35.010222    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 97631f54aa43"
	I1001 12:37:35.021664    4242 logs.go:123] Gathering logs for container status ...
	I1001 12:37:35.021673    4242 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:35.033714    4242 logs.go:123] Gathering logs for etcd [4fffcaa9e400] ...
	I1001 12:37:35.033726    4242 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fffcaa9e400"
	I1001 12:37:37.549695    4242 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:39.272345    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:39.272389    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:42.551909    4242 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:42.555637    4242 out.go:201] 
	W1001 12:37:42.559317    4242 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 12:37:42.559333    4242 out.go:270] * 
	W1001 12:37:42.560204    4242 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:37:42.571369    4242 out.go:201] 
	I1001 12:37:44.273916    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:44.273964    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:49.275978    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:49.276021    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:54.278276    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:54.278366    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
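For reference, the healthz probe that keeps timing out above can be re-run by hand. 10.0.2.15 is the VM's own user-mode address, so the check has to run from inside the guest; a minimal sketch, assuming the profile name from this run and that curl is present in the guest image:

	minikube -p running-upgrade-810000 ssh -- \
	  curl -sk -o /dev/null -w '%{http_code}\n' --max-time 5 https://10.0.2.15:8443/healthz

A healthy apiserver answers 200 with body "ok"; here the connection never completes, which matches the "context deadline exceeded" errors logged above.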
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-10-01 19:28:46 UTC, ends at Tue 2024-10-01 19:37:58 UTC. --
	Oct 01 19:37:43 running-upgrade-810000 dockerd[2894]: time="2024-10-01T19:37:43.307861656Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d826141e0fc436cb7f228ba09e52ca32f19c37003652ea0eef840eedd7e52ea9 pid=18560 runtime=io.containerd.runc.v2
	Oct 01 19:37:43 running-upgrade-810000 dockerd[2894]: time="2024-10-01T19:37:43.308952072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 19:37:43 running-upgrade-810000 dockerd[2894]: time="2024-10-01T19:37:43.308969029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 19:37:43 running-upgrade-810000 dockerd[2894]: time="2024-10-01T19:37:43.308974862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 19:37:43 running-upgrade-810000 dockerd[2894]: time="2024-10-01T19:37:43.309041857Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6d08848f34f3c8796e22c1ac1d07845ab6f5f7cbe189c1b074a04afe3a3934a9 pid=18575 runtime=io.containerd.runc.v2
	Oct 01 19:37:43 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 19:37:44 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:44Z" level=error msg="ContainerStats resp: {0x400076d800 linux}"
	Oct 01 19:37:45 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:45Z" level=error msg="ContainerStats resp: {0x40008254c0 linux}"
	Oct 01 19:37:45 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:45Z" level=error msg="ContainerStats resp: {0x4000909bc0 linux}"
	Oct 01 19:37:45 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:45Z" level=error msg="ContainerStats resp: {0x4000909d00 linux}"
	Oct 01 19:37:45 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:45Z" level=error msg="ContainerStats resp: {0x4000825c00 linux}"
	Oct 01 19:37:45 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:45Z" level=error msg="ContainerStats resp: {0x40007a48c0 linux}"
	Oct 01 19:37:48 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 19:37:53 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:53Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 19:37:55 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:55Z" level=error msg="ContainerStats resp: {0x40003b0fc0 linux}"
	Oct 01 19:37:55 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:55Z" level=error msg="ContainerStats resp: {0x40003b1c40 linux}"
	Oct 01 19:37:56 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:56Z" level=error msg="ContainerStats resp: {0x40007a51c0 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40007a5d80 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40007b2180 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40004f2680 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40004f2d80 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40007b2e80 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40007b21c0 linux}"
	Oct 01 19:37:57 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:57Z" level=error msg="ContainerStats resp: {0x40007b2780 linux}"
	Oct 01 19:37:58 running-upgrade-810000 cri-dockerd[2735]: time="2024-10-01T19:37:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d826141e0fc43       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   9264f3a6ddbbf
	6d08848f34f3c       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   6f2e63fa8a057
	f312b9c9ac08f       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9264f3a6ddbbf
	1242378878f50       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6f2e63fa8a057
	97631f54aa430       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   7246a79bbf101
	ae0380eb6ceb2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   c123b01ec1f22
	3430a5479e9c3       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   d6ab8e6ed42a2
	38b93891ecd6f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a3980bf2837d0
	4fffcaa9e400b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e8abc0b9f92c2
	b4b0ba48f60bd       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4088208b051e1
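The status table below is produced by the fallback pipeline shown earlier in the log (crictl when available, otherwise docker). A sketch of regenerating it inside the guest, assuming the same profile:

	minikube -p running-upgrade-810000 ssh -- "sudo crictl ps -a || sudo docker ps -a"

Note the two coredns containers at attempt 2: both earlier instances exited and were restarted by the kubelet, consistent with the SIGTERM lines in the coredns logs and the "RemoveContainer" kubelet entries further down.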
	
	
	==> coredns [1242378878f5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:57934->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:51542->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:37505->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:53240->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:41615->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:50958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:50906->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:52391->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:45691->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6124356395922702978.4137539812049815115. HINFO: read udp 10.244.0.3:36629->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6d08848f34f3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 878893037932690181.4707987987850453534. HINFO: read udp 10.244.0.3:36157->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 878893037932690181.4707987987850453534. HINFO: read udp 10.244.0.3:35271->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 878893037932690181.4707987987850453534. HINFO: read udp 10.244.0.3:41291->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d826141e0fc4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7393951273581112383.3870271347344276447. HINFO: read udp 10.244.0.2:50120->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7393951273581112383.3870271347344276447. HINFO: read udp 10.244.0.2:33439->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7393951273581112383.3870271347344276447. HINFO: read udp 10.244.0.2:50935->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f312b9c9ac08] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:33683->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:38820->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:42671->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:37598->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:42450->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:45116->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:44092->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:56146->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:40067->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7820449328625862659.2538264417370038504. HINFO: read udp 10.244.0.2:51077->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
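The repeated i/o timeouts in all four coredns instances point at the upstream resolver (10.0.2.3, QEMU's built-in user-network DNS) being unreachable from the pod network, not at coredns itself. A hypothetical spot check from inside the guest, assuming busybox nslookup is available in the image:

	minikube -p running-upgrade-810000 ssh -- nslookup kubernetes.io 10.0.2.3

If that also times out, the problem is the VM's outbound DNS path rather than the cluster DNS configuration.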
	
	
	==> describe nodes <==
	Name:               running-upgrade-810000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-810000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=running-upgrade-810000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T12_33_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-810000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:37:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:33:41 +0000   Tue, 01 Oct 2024 19:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:33:41 +0000   Tue, 01 Oct 2024 19:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:33:41 +0000   Tue, 01 Oct 2024 19:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:33:41 +0000   Tue, 01 Oct 2024 19:33:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-810000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 67ea2b7fd53e4f80a25a3211fd65108c
	  System UUID:                67ea2b7fd53e4f80a25a3211fd65108c
	  Boot ID:                    77c30a79-29a4-4ba6-8d2e-afa75fefd7cd
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7bv6c                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-btglt                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-810000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-810000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-810000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-p9849                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-810000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-810000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-810000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-810000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-810000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-810000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-810000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-810000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-810000 event: Registered Node running-upgrade-810000 in Controller
	
	
	==> dmesg <==
	[  +1.668091] systemd-fstab-generator[874]: Ignoring "noauto" for root device
	[  +0.063724] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.063687] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +1.142007] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.080513] systemd-fstab-generator[1046]: Ignoring "noauto" for root device
	[  +0.083921] systemd-fstab-generator[1057]: Ignoring "noauto" for root device
	[Oct 1 19:29] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[ +15.144861] systemd-fstab-generator[1947]: Ignoring "noauto" for root device
	[  +2.453593] systemd-fstab-generator[2218]: Ignoring "noauto" for root device
	[  +0.171048] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.077116] systemd-fstab-generator[2267]: Ignoring "noauto" for root device
	[  +0.085974] systemd-fstab-generator[2280]: Ignoring "noauto" for root device
	[  +1.690206] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.137825] systemd-fstab-generator[2692]: Ignoring "noauto" for root device
	[  +0.065246] systemd-fstab-generator[2703]: Ignoring "noauto" for root device
	[  +0.064155] systemd-fstab-generator[2714]: Ignoring "noauto" for root device
	[  +0.076321] systemd-fstab-generator[2728]: Ignoring "noauto" for root device
	[  +2.300703] systemd-fstab-generator[2880]: Ignoring "noauto" for root device
	[  +5.090198] systemd-fstab-generator[3285]: Ignoring "noauto" for root device
	[  +1.087878] systemd-fstab-generator[3414]: Ignoring "noauto" for root device
	[ +19.411503] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 1 19:33] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.305625] systemd-fstab-generator[11637]: Ignoring "noauto" for root device
	[  +5.124064] systemd-fstab-generator[12239]: Ignoring "noauto" for root device
	[  +0.455957] systemd-fstab-generator[12371]: Ignoring "noauto" for root device
	
	
	==> etcd [4fffcaa9e400] <==
	{"level":"info","ts":"2024-10-01T19:33:37.340Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T19:33:37.340Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T19:33:37.340Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-01T19:33:37.341Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-01T19:33:37.341Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-01T19:33:37.341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-01T19:33:37.341Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-01T19:33:37.737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-01T19:33:37.738Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-810000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:33:37.738Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:33:37.738Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-01T19:33:37.738Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:33:37.739Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:33:37.739Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:33:37.739Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:33:37.739Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:33:37.750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:33:37.750Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T19:33:37.752Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:37:58 up 9 min,  0 users,  load average: 0.09, 0.22, 0.14
	Linux running-upgrade-810000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [b4b0ba48f60b] <==
	I1001 19:33:39.184754       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1001 19:33:39.184802       1 cache.go:39] Caches are synced for autoregister controller
	I1001 19:33:39.184921       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1001 19:33:39.184953       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1001 19:33:39.184968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 19:33:39.188887       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1001 19:33:39.195480       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1001 19:33:39.917452       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1001 19:33:40.102594       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1001 19:33:40.113190       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1001 19:33:40.113351       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 19:33:40.236636       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 19:33:40.250277       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 19:33:40.344592       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1001 19:33:40.346790       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1001 19:33:40.347173       1 controller.go:611] quota admission added evaluator for: endpoints
	I1001 19:33:40.348390       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 19:33:41.216926       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1001 19:33:41.496257       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1001 19:33:41.499307       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1001 19:33:41.516570       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1001 19:33:41.564985       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 19:33:54.306007       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1001 19:33:54.553535       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1001 19:33:55.376250       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [38b93891ecd6] <==
	I1001 19:33:54.075583       1 shared_informer.go:262] Caches are synced for ephemeral
	I1001 19:33:54.077747       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1001 19:33:54.079914       1 shared_informer.go:262] Caches are synced for TTL
	I1001 19:33:54.085068       1 shared_informer.go:262] Caches are synced for expand
	I1001 19:33:54.101134       1 shared_informer.go:262] Caches are synced for HPA
	I1001 19:33:54.102171       1 shared_informer.go:262] Caches are synced for stateful set
	I1001 19:33:54.102190       1 shared_informer.go:262] Caches are synced for disruption
	I1001 19:33:54.102200       1 disruption.go:371] Sending events to api server.
	I1001 19:33:54.102218       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1001 19:33:54.103313       1 shared_informer.go:262] Caches are synced for service account
	I1001 19:33:54.103348       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1001 19:33:54.103411       1 shared_informer.go:262] Caches are synced for crt configmap
	I1001 19:33:54.202181       1 shared_informer.go:262] Caches are synced for persistent volume
	I1001 19:33:54.218301       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1001 19:33:54.232075       1 shared_informer.go:262] Caches are synced for endpoint
	I1001 19:33:54.252864       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1001 19:33:54.305041       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 19:33:54.309298       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-p9849"
	I1001 19:33:54.341259       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 19:33:54.554939       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1001 19:33:54.725846       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 19:33:54.803037       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 19:33:54.803116       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1001 19:33:55.104760       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7bv6c"
	I1001 19:33:55.109653       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-btglt"
	
	
	==> kube-proxy [ae0380eb6ceb] <==
	I1001 19:33:55.365238       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1001 19:33:55.365264       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1001 19:33:55.365273       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1001 19:33:55.373876       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1001 19:33:55.373888       1 server_others.go:206] "Using iptables Proxier"
	I1001 19:33:55.373900       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1001 19:33:55.373989       1 server.go:661] "Version info" version="v1.24.1"
	I1001 19:33:55.373996       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:33:55.374410       1 config.go:226] "Starting endpoint slice config controller"
	I1001 19:33:55.374414       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1001 19:33:55.374708       1 config.go:317] "Starting service config controller"
	I1001 19:33:55.374711       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1001 19:33:55.374947       1 config.go:444] "Starting node config controller"
	I1001 19:33:55.374950       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1001 19:33:55.475428       1 shared_informer.go:262] Caches are synced for node config
	I1001 19:33:55.475428       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1001 19:33:55.475468       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [3430a5479e9c] <==
	W1001 19:33:39.136640       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 19:33:39.136715       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1001 19:33:39.136653       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 19:33:39.136947       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1001 19:33:39.136694       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 19:33:39.136973       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 19:33:39.136707       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:33:39.136989       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1001 19:33:40.034925       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 19:33:40.035376       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1001 19:33:40.035274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 19:33:40.035431       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1001 19:33:40.063416       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 19:33:40.063554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1001 19:33:40.085836       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 19:33:40.086079       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1001 19:33:40.153835       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 19:33:40.153924       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1001 19:33:40.175509       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 19:33:40.175525       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1001 19:33:40.186075       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 19:33:40.186142       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1001 19:33:40.197877       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 19:33:40.197891       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1001 19:33:40.734230       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-10-01 19:28:46 UTC, ends at Tue 2024-10-01 19:37:58 UTC. --
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.059681   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7rc\" (UniqueName: \"kubernetes.io/projected/ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19-kube-api-access-lz7rc\") pod \"storage-provisioner\" (UID: \"ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19\") " pod="kube-system/storage-provisioner"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.160656   12245 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.161038   12245 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.164224   12245 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.164249   12245 projected.go:192] Error preparing data for projected volume kube-api-access-lz7rc for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.164308   12245 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19-kube-api-access-lz7rc podName:ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19 nodeName:}" failed. No retries permitted until 2024-10-01 19:33:54.664293668 +0000 UTC m=+13.179967418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lz7rc" (UniqueName: "kubernetes.io/projected/ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19-kube-api-access-lz7rc") pod "storage-provisioner" (UID: "ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19") : configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.310384   12245 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.464182   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzr7j\" (UniqueName: \"kubernetes.io/projected/51c9966f-f95c-4511-805e-dcd86ae935fc-kube-api-access-lzr7j\") pod \"kube-proxy-p9849\" (UID: \"51c9966f-f95c-4511-805e-dcd86ae935fc\") " pod="kube-system/kube-proxy-p9849"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.464251   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51c9966f-f95c-4511-805e-dcd86ae935fc-lib-modules\") pod \"kube-proxy-p9849\" (UID: \"51c9966f-f95c-4511-805e-dcd86ae935fc\") " pod="kube-system/kube-proxy-p9849"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.464263   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51c9966f-f95c-4511-805e-dcd86ae935fc-kube-proxy\") pod \"kube-proxy-p9849\" (UID: \"51c9966f-f95c-4511-805e-dcd86ae935fc\") " pod="kube-system/kube-proxy-p9849"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: I1001 19:33:54.464273   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51c9966f-f95c-4511-805e-dcd86ae935fc-xtables-lock\") pod \"kube-proxy-p9849\" (UID: \"51c9966f-f95c-4511-805e-dcd86ae935fc\") " pod="kube-system/kube-proxy-p9849"
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.567415   12245 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.567433   12245 projected.go:192] Error preparing data for projected volume kube-api-access-lzr7j for pod kube-system/kube-proxy-p9849: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.567469   12245 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/51c9966f-f95c-4511-805e-dcd86ae935fc-kube-api-access-lzr7j podName:51c9966f-f95c-4511-805e-dcd86ae935fc nodeName:}" failed. No retries permitted until 2024-10-01 19:33:55.067460543 +0000 UTC m=+13.583134293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lzr7j" (UniqueName: "kubernetes.io/projected/51c9966f-f95c-4511-805e-dcd86ae935fc-kube-api-access-lzr7j") pod "kube-proxy-p9849" (UID: "51c9966f-f95c-4511-805e-dcd86ae935fc") : configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.666155   12245 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.666173   12245 projected.go:192] Error preparing data for projected volume kube-api-access-lz7rc for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 01 19:33:54 running-upgrade-810000 kubelet[12245]: E1001 19:33:54.666200   12245 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19-kube-api-access-lz7rc podName:ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19 nodeName:}" failed. No retries permitted until 2024-10-01 19:33:55.666191223 +0000 UTC m=+14.181864973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lz7rc" (UniqueName: "kubernetes.io/projected/ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19-kube-api-access-lz7rc") pod "storage-provisioner" (UID: "ddb9e511-fb2b-4c4c-93ae-7d31a97a4e19") : configmap "kube-root-ca.crt" not found
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.109169   12245 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.115099   12245 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.271528   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b86ee3c-8c3b-4de7-8967-554034948e5e-config-volume\") pod \"coredns-6d4b75cb6d-7bv6c\" (UID: \"7b86ee3c-8c3b-4de7-8967-554034948e5e\") " pod="kube-system/coredns-6d4b75cb6d-7bv6c"
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.271554   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69e2f1b2-be40-4b1e-a5b1-b1c7f4e2cd2c-config-volume\") pod \"coredns-6d4b75cb6d-btglt\" (UID: \"69e2f1b2-be40-4b1e-a5b1-b1c7f4e2cd2c\") " pod="kube-system/coredns-6d4b75cb6d-btglt"
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.271572   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sclx7\" (UniqueName: \"kubernetes.io/projected/7b86ee3c-8c3b-4de7-8967-554034948e5e-kube-api-access-sclx7\") pod \"coredns-6d4b75cb6d-7bv6c\" (UID: \"7b86ee3c-8c3b-4de7-8967-554034948e5e\") " pod="kube-system/coredns-6d4b75cb6d-7bv6c"
	Oct 01 19:33:55 running-upgrade-810000 kubelet[12245]: I1001 19:33:55.271584   12245 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lklms\" (UniqueName: \"kubernetes.io/projected/69e2f1b2-be40-4b1e-a5b1-b1c7f4e2cd2c-kube-api-access-lklms\") pod \"coredns-6d4b75cb6d-btglt\" (UID: \"69e2f1b2-be40-4b1e-a5b1-b1c7f4e2cd2c\") " pod="kube-system/coredns-6d4b75cb6d-btglt"
	Oct 01 19:37:43 running-upgrade-810000 kubelet[12245]: I1001 19:37:43.848080   12245 scope.go:110] "RemoveContainer" containerID="c3764113e7e45b4778a993a448e6f7d95c6a52c06dc38d591117f30aa0b36ecb"
	Oct 01 19:37:43 running-upgrade-810000 kubelet[12245]: I1001 19:37:43.867187   12245 scope.go:110] "RemoveContainer" containerID="5e5e58a930acf50c6b7713c1fe6d8ebcf91522794da426c3752e4829d30b88ed"
	
	
	==> storage-provisioner [97631f54aa43] <==
	I1001 19:33:56.117120       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:33:56.123910       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:33:56.123927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:33:56.149967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:33:56.150429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-810000_c13513cb-5ee3-4e18-bf3e-2752af85468f!
	I1001 19:33:56.150878       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5e9fa37-58a0-43d1-8ae0-e5c77770ce23", APIVersion:"v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-810000_c13513cb-5ee3-4e18-bf3e-2752af85468f became leader
	I1001 19:33:56.252683       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-810000_c13513cb-5ee3-4e18-bf3e-2752af85468f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-810000 -n running-upgrade-810000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-810000 -n running-upgrade-810000: exit status 2 (15.64529525s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-810000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-810000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-810000: (1.216162167s)
--- FAIL: TestRunningBinaryUpgrade (621.75s)
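The status check above uses minikube's Go-template output to extract a single field. When triaging a hang like this, the full status object is often more useful; a sketch with the same binary and profile (run before the cleanup step deletes it):

	out/minikube-darwin-arm64 status -p running-upgrade-810000 --output json

The JSON form reports every component (host, kubelet, apiserver, kubeconfig) in one shot, so a "Stopped" apiserver inside a running host is immediately visible.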

                                                
                                    
TestKubernetesUpgrade (18.69s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.705126458s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-889000" primary control-plane node in "kubernetes-upgrade-889000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-889000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
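The "Connection refused" on /var/run/socket_vmnet above means the socket file exists but no socket_vmnet daemon was accepting connections behind it (a missing file would fail with "no such file" instead). A hypothetical host-side triage, assuming socket_vmnet runs as a background service on this Jenkins agent:

	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet is not running"

The socket and client paths come straight from the cluster config dumped in the stderr block below (SocketVMnetClientPath / SocketVMnetPath).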
** stderr ** 
	I1001 12:30:54.198938    4630 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:30:54.199094    4630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:30:54.199098    4630 out.go:358] Setting ErrFile to fd 2...
	I1001 12:30:54.199100    4630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:30:54.199232    4630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:30:54.200367    4630 out.go:352] Setting JSON to false
	I1001 12:30:54.216698    4630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3619,"bootTime":1727807435,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:30:54.216767    4630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:30:54.222966    4630 out.go:177] * [kubernetes-upgrade-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:30:54.230944    4630 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:30:54.230996    4630 notify.go:220] Checking for updates...
	I1001 12:30:54.239726    4630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:30:54.243831    4630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:30:54.247972    4630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:30:54.250999    4630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:30:54.253941    4630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:30:54.257363    4630 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:30:54.257428    4630 config.go:182] Loaded profile config "running-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:30:54.257474    4630 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:30:54.261899    4630 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:30:54.268991    4630 start.go:297] selected driver: qemu2
	I1001 12:30:54.269004    4630 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:30:54.269012    4630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:30:54.271385    4630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:30:54.273926    4630 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:30:54.277077    4630 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 12:30:54.277099    4630 cni.go:84] Creating CNI manager for ""
	I1001 12:30:54.277136    4630 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 12:30:54.277179    4630 start.go:340] cluster config:
	{Name:kubernetes-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:30:54.281423    4630 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:30:54.285827    4630 out.go:177] * Starting "kubernetes-upgrade-889000" primary control-plane node in "kubernetes-upgrade-889000" cluster
	I1001 12:30:54.293971    4630 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 12:30:54.294020    4630 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 12:30:54.294036    4630 cache.go:56] Caching tarball of preloaded images
	I1001 12:30:54.294141    4630 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:30:54.294147    4630 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 12:30:54.294220    4630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kubernetes-upgrade-889000/config.json ...
	I1001 12:30:54.294231    4630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kubernetes-upgrade-889000/config.json: {Name:mk375c6d7eb9c2f87db770796d43d2c584b547ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:30:54.294500    4630 start.go:360] acquireMachinesLock for kubernetes-upgrade-889000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:30:54.294534    4630 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "kubernetes-upgrade-889000"
	I1001 12:30:54.294545    4630 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:30:54.294583    4630 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:30:54.301979    4630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:30:54.317487    4630 start.go:159] libmachine.API.Create for "kubernetes-upgrade-889000" (driver="qemu2")
	I1001 12:30:54.317520    4630 client.go:168] LocalClient.Create starting
	I1001 12:30:54.317592    4630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:30:54.317626    4630 main.go:141] libmachine: Decoding PEM data...
	I1001 12:30:54.317634    4630 main.go:141] libmachine: Parsing certificate...
	I1001 12:30:54.317675    4630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:30:54.317698    4630 main.go:141] libmachine: Decoding PEM data...
	I1001 12:30:54.317709    4630 main.go:141] libmachine: Parsing certificate...
	I1001 12:30:54.318085    4630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:30:54.484654    4630 main.go:141] libmachine: Creating SSH key...
	I1001 12:30:54.530784    4630 main.go:141] libmachine: Creating Disk image...
	I1001 12:30:54.530796    4630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:30:54.530994    4630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:30:54.540168    4630 main.go:141] libmachine: STDOUT: 
	I1001 12:30:54.540184    4630 main.go:141] libmachine: STDERR: 
	I1001 12:30:54.540246    4630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2 +20000M
	I1001 12:30:54.548211    4630 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:30:54.548229    4630 main.go:141] libmachine: STDERR: 
	I1001 12:30:54.548247    4630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:30:54.548253    4630 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:30:54.548265    4630 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:30:54.548307    4630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0c:f8:74:52:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:30:54.549956    4630 main.go:141] libmachine: STDOUT: 
	I1001 12:30:54.549976    4630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:30:54.549999    4630 client.go:171] duration metric: took 232.47925ms to LocalClient.Create
	I1001 12:30:56.552000    4630 start.go:128] duration metric: took 2.25745025s to createHost
	I1001 12:30:56.552017    4630 start.go:83] releasing machines lock for "kubernetes-upgrade-889000", held for 2.257524375s
	W1001 12:30:56.552037    4630 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:30:56.555855    4630 out.go:177] * Deleting "kubernetes-upgrade-889000" in qemu2 ...
	W1001 12:30:56.566342    4630 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:30:56.566353    4630 start.go:729] Will try again in 5 seconds ...
	I1001 12:31:01.568485    4630 start.go:360] acquireMachinesLock for kubernetes-upgrade-889000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:31:01.568997    4630 start.go:364] duration metric: took 419.75µs to acquireMachinesLock for "kubernetes-upgrade-889000"
	I1001 12:31:01.569136    4630 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:31:01.569333    4630 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:31:01.577373    4630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:31:01.610976    4630 start.go:159] libmachine.API.Create for "kubernetes-upgrade-889000" (driver="qemu2")
	I1001 12:31:01.611028    4630 client.go:168] LocalClient.Create starting
	I1001 12:31:01.611137    4630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:31:01.611222    4630 main.go:141] libmachine: Decoding PEM data...
	I1001 12:31:01.611237    4630 main.go:141] libmachine: Parsing certificate...
	I1001 12:31:01.611298    4630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:31:01.611337    4630 main.go:141] libmachine: Decoding PEM data...
	I1001 12:31:01.611350    4630 main.go:141] libmachine: Parsing certificate...
	I1001 12:31:01.611842    4630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:31:01.782910    4630 main.go:141] libmachine: Creating SSH key...
	I1001 12:31:01.810205    4630 main.go:141] libmachine: Creating Disk image...
	I1001 12:31:01.810211    4630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:31:01.810393    4630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:31:01.819467    4630 main.go:141] libmachine: STDOUT: 
	I1001 12:31:01.819483    4630 main.go:141] libmachine: STDERR: 
	I1001 12:31:01.819537    4630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2 +20000M
	I1001 12:31:01.827406    4630 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:31:01.827422    4630 main.go:141] libmachine: STDERR: 
	I1001 12:31:01.827438    4630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:31:01.827443    4630 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:31:01.827459    4630 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:31:01.827487    4630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:02:a7:3f:26:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:31:01.829084    4630 main.go:141] libmachine: STDOUT: 
	I1001 12:31:01.829099    4630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:31:01.829113    4630 client.go:171] duration metric: took 218.083125ms to LocalClient.Create
	I1001 12:31:03.831386    4630 start.go:128] duration metric: took 2.262055s to createHost
	I1001 12:31:03.831467    4630 start.go:83] releasing machines lock for "kubernetes-upgrade-889000", held for 2.262483791s
	W1001 12:31:03.831838    4630 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:31:03.845536    4630 out.go:201] 
	W1001 12:31:03.848654    4630 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:31:03.848708    4630 out.go:270] * 
	* 
	W1001 12:31:03.851231    4630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:31:03.861466    4630 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-889000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-889000: (3.604686667s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-889000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-889000 status --format={{.Host}}: exit status 7 (46.92ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.191400042s)

-- stdout --
	* [kubernetes-upgrade-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-889000" primary control-plane node in "kubernetes-upgrade-889000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-889000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1001 12:31:07.558230    4668 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:31:07.558378    4668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:31:07.558382    4668 out.go:358] Setting ErrFile to fd 2...
	I1001 12:31:07.558384    4668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:31:07.558494    4668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:31:07.559543    4668 out.go:352] Setting JSON to false
	I1001 12:31:07.576116    4668 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3632,"bootTime":1727807435,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:31:07.576200    4668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:31:07.583009    4668 out.go:177] * [kubernetes-upgrade-889000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:31:07.592009    4668 notify.go:220] Checking for updates...
	I1001 12:31:07.596029    4668 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:31:07.600929    4668 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:31:07.605003    4668 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:31:07.608052    4668 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:31:07.611901    4668 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:31:07.618954    4668 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:31:07.622266    4668 config.go:182] Loaded profile config "kubernetes-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 12:31:07.622517    4668 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:31:07.627086    4668 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:31:07.633947    4668 start.go:297] selected driver: qemu2
	I1001 12:31:07.633953    4668 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:31:07.633997    4668 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:31:07.636089    4668 cni.go:84] Creating CNI manager for ""
	I1001 12:31:07.636122    4668 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:31:07.636148    4668 start.go:340] cluster config:
	{Name:kubernetes-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:31:07.639411    4668 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:31:07.647940    4668 out.go:177] * Starting "kubernetes-upgrade-889000" primary control-plane node in "kubernetes-upgrade-889000" cluster
	I1001 12:31:07.651983    4668 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:31:07.651999    4668 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:31:07.652009    4668 cache.go:56] Caching tarball of preloaded images
	I1001 12:31:07.652087    4668 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:31:07.652092    4668 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:31:07.652150    4668 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kubernetes-upgrade-889000/config.json ...
	I1001 12:31:07.652639    4668 start.go:360] acquireMachinesLock for kubernetes-upgrade-889000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:31:07.652670    4668 start.go:364] duration metric: took 24.958µs to acquireMachinesLock for "kubernetes-upgrade-889000"
	I1001 12:31:07.652678    4668 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:31:07.652683    4668 fix.go:54] fixHost starting: 
	I1001 12:31:07.652795    4668 fix.go:112] recreateIfNeeded on kubernetes-upgrade-889000: state=Stopped err=<nil>
	W1001 12:31:07.652803    4668 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:31:07.656950    4668 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-889000" ...
	I1001 12:31:07.664780    4668 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:31:07.664816    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:02:a7:3f:26:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:31:07.666663    4668 main.go:141] libmachine: STDOUT: 
	I1001 12:31:07.666679    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:31:07.666707    4668 fix.go:56] duration metric: took 14.023ms for fixHost
	I1001 12:31:07.666712    4668 start.go:83] releasing machines lock for "kubernetes-upgrade-889000", held for 14.038042ms
	W1001 12:31:07.666718    4668 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:31:07.666754    4668 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:31:07.666759    4668 start.go:729] Will try again in 5 seconds ...
	I1001 12:31:12.668739    4668 start.go:360] acquireMachinesLock for kubernetes-upgrade-889000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:31:12.668851    4668 start.go:364] duration metric: took 86.583µs to acquireMachinesLock for "kubernetes-upgrade-889000"
	I1001 12:31:12.668868    4668 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:31:12.668872    4668 fix.go:54] fixHost starting: 
	I1001 12:31:12.669020    4668 fix.go:112] recreateIfNeeded on kubernetes-upgrade-889000: state=Stopped err=<nil>
	W1001 12:31:12.669026    4668 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:31:12.674365    4668 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-889000" ...
	I1001 12:31:12.682386    4668 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:31:12.682432    4668 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:02:a7:3f:26:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubernetes-upgrade-889000/disk.qcow2
	I1001 12:31:12.684733    4668 main.go:141] libmachine: STDOUT: 
	I1001 12:31:12.684747    4668 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:31:12.684775    4668 fix.go:56] duration metric: took 15.903333ms for fixHost
	I1001 12:31:12.684780    4668 start.go:83] releasing machines lock for "kubernetes-upgrade-889000", held for 15.924083ms
	W1001 12:31:12.684817    4668 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-889000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:31:12.693292    4668 out.go:201] 
	W1001 12:31:12.698345    4668 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:31:12.698357    4668 out.go:270] * 
	* 
	W1001 12:31:12.698885    4668 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:31:12.712277    4668 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-889000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-889000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-889000 version --output=json: exit status 1 (27.868ms)

** stderr ** 
	error: context "kubernetes-upgrade-889000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-01 12:31:12.749613 -0700 PDT m=+2703.595754668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-889000 -n kubernetes-upgrade-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-889000 -n kubernetes-upgrade-889000: exit status 7 (29.805041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-889000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-889000
--- FAIL: TestKubernetesUpgrade (18.69s)
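Note: every qemu2 VM start in this test (and in TestRunningBinaryUpgrade above) dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so these look like environment failures on the agent rather than upgrade-path regressions. The Go sketch below is illustrative only, not part of the suite; the socket path comes from the config dumps above, and the 2-second timeout is an arbitrary choice. A pre-flight probe along these lines would separate "daemon not running" from a genuine driver bug:

	// probe_socket_vmnet.go - illustrative pre-flight check, not suite code.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dumps above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Same condition the driver surfaces as:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused connection here points at host setup (the daemon is started out of band on the agent, typically as root) rather than at the minikube binary under test.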

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E1001 12:27:15.189935    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19736
- KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4007521343/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.49s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19736
- KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1962203638/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)
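Note: both HyperkitDriverSkipUpgrade subtests fail for the same structural reason: the hyperkit driver exists only for darwin/amd64, so on this darwin/arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A guard along the following lines is the usual go-test idiom for benching such tests on Apple silicon; this is a sketch, and skipIfHyperkitUnsupported is a hypothetical helper, not code from driver_install_or_update_test.go:

	// Hypothetical helper showing an architecture guard for hyperkit-only tests.
	package integration

	import (
		"runtime"
		"testing"
	)

	// skipIfHyperkitUnsupported skips the calling test unless the host can
	// actually run the hyperkit driver (darwin/amd64 only).
	func skipIfHyperkitUnsupported(t *testing.T) {
		t.Helper()
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skip("hyperkit driver requires darwin/amd64; got " + runtime.GOOS + "/" + runtime.GOARCH)
		}
	}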

TestStoppedBinaryUpgrade/Upgrade (582.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2299340835 start -p stopped-upgrade-340000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2299340835 start -p stopped-upgrade-340000 --memory=2200 --vm-driver=qemu2 : (47.534326583s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2299340835 -p stopped-upgrade-340000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2299340835 -p stopped-upgrade-340000 stop: (12.1074565s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-340000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1001 12:35:42.728561    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:35:52.077989    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-340000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.023368542s)

-- stdout --
	* [stopped-upgrade-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-340000" primary control-plane node in "stopped-upgrade-340000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-340000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I1001 12:32:17.249419    4721 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:32:17.249565    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:32:17.249569    4721 out.go:358] Setting ErrFile to fd 2...
	I1001 12:32:17.249572    4721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:32:17.249742    4721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:32:17.250885    4721 out.go:352] Setting JSON to false
	I1001 12:32:17.269490    4721 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3702,"bootTime":1727807435,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:32:17.269574    4721 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:32:17.272199    4721 out.go:177] * [stopped-upgrade-340000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:32:17.280216    4721 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:32:17.280271    4721 notify.go:220] Checking for updates...
	I1001 12:32:17.287122    4721 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:32:17.290170    4721 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:32:17.293091    4721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:32:17.296147    4721 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:32:17.299180    4721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:32:17.302373    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:32:17.306126    4721 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 12:32:17.309203    4721 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:32:17.313093    4721 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:32:17.320178    4721 start.go:297] selected driver: qemu2
	I1001 12:32:17.320184    4721 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:17.320245    4721 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:32:17.322859    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:32:17.322891    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:32:17.322918    4721 start.go:340] cluster config:
	{Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:17.322970    4721 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:32:17.331177    4721 out.go:177] * Starting "stopped-upgrade-340000" primary control-plane node in "stopped-upgrade-340000" cluster
	I1001 12:32:17.334109    4721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:32:17.334127    4721 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 12:32:17.334133    4721 cache.go:56] Caching tarball of preloaded images
	I1001 12:32:17.334176    4721 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:32:17.334182    4721 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 12:32:17.334231    4721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/config.json ...
	I1001 12:32:17.334712    4721 start.go:360] acquireMachinesLock for stopped-upgrade-340000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:32:17.334747    4721 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "stopped-upgrade-340000"
	I1001 12:32:17.334757    4721 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:32:17.334762    4721 fix.go:54] fixHost starting: 
	I1001 12:32:17.334881    4721 fix.go:112] recreateIfNeeded on stopped-upgrade-340000: state=Stopped err=<nil>
	W1001 12:32:17.334891    4721 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:32:17.338162    4721 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-340000" ...
	I1001 12:32:17.345118    4721 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:32:17.345193    4721 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50476-:22,hostfwd=tcp::50477-:2376,hostname=stopped-upgrade-340000 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/disk.qcow2
	I1001 12:32:17.394095    4721 main.go:141] libmachine: STDOUT: 
	I1001 12:32:17.394128    4721 main.go:141] libmachine: STDERR: 
	I1001 12:32:17.394134    4721 main.go:141] libmachine: Waiting for VM to start (ssh -p 50476 docker@127.0.0.1)...
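
Note: the restart above amounts to shelling out to qemu-system-aarch64 with -daemonize and then polling the host-forwarded SSH port (50476) until the guest answers. A minimal Go sketch of that launch-and-wait pattern, with flags and port taken from the log line; the helper names are illustrative, not minikube's actual API:

    package main

    import (
        "fmt"
        "net"
        "os/exec"
        "time"
    )

    // startVM daemonizes a QEMU guest; the flags mirror the log line above
    // (accelerator, memory, SMP, user-mode NIC with SSH port forwarding).
    func startVM(disk, iso string) error {
        return exec.Command("qemu-system-aarch64",
            "-M", "virt,highmem=off", "-cpu", "host", "-accel", "hvf",
            "-m", "2200", "-smp", "2", "-boot", "d", "-cdrom", iso,
            "-nic", "user,model=virtio,hostfwd=tcp::50476-:22",
            "-daemonize", disk).Run()
    }

    // waitForSSH polls the forwarded port until the guest is reachable,
    // matching the "Waiting for VM to start" phase in the log.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if c, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
                c.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        _ = startVM // shown for completeness; only the wait is demonstrated
        fmt.Println(waitForSSH("127.0.0.1:50476", 2*time.Minute))
    }
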
	I1001 12:32:37.139627    4721 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/config.json ...
	I1001 12:32:37.140523    4721 machine.go:93] provisionDockerMachine start ...
	I1001 12:32:37.140740    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.141289    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.141303    4721 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 12:32:37.219434    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 12:32:37.219470    4721 buildroot.go:166] provisioning hostname "stopped-upgrade-340000"
	I1001 12:32:37.219624    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.219854    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.219865    4721 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-340000 && echo "stopped-upgrade-340000" | sudo tee /etc/hostname
	I1001 12:32:37.285322    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-340000
	
	I1001 12:32:37.285419    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.285600    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.285611    4721 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-340000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-340000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-340000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 12:32:37.343358    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
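
Note: the shell fragment above keeps /etc/hosts consistent with the freshly set hostname: if no entry mentions stopped-upgrade-340000, it either rewrites an existing 127.0.1.1 line or appends one. A rough Go equivalent of the same logic (path and hostname taken from the log; a sketch, not minikube code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/sed/tee logic in the log: leave the
    // file alone if the hostname is present, otherwise rewrite the 127.0.1.1
    // line or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), hostname) {
            return nil // already present
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname // rewrite existing entry
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname) // or append one
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("/etc/hosts", "stopped-upgrade-340000"))
    }
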
	I1001 12:32:37.343373    4721 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19736-1073/.minikube CaCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19736-1073/.minikube}
	I1001 12:32:37.343383    4721 buildroot.go:174] setting up certificates
	I1001 12:32:37.343388    4721 provision.go:84] configureAuth start
	I1001 12:32:37.343397    4721 provision.go:143] copyHostCerts
	I1001 12:32:37.343489    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem, removing ...
	I1001 12:32:37.343496    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem
	I1001 12:32:37.343638    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.pem (1078 bytes)
	I1001 12:32:37.343852    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem, removing ...
	I1001 12:32:37.343856    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem
	I1001 12:32:37.343956    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/cert.pem (1123 bytes)
	I1001 12:32:37.344544    4721 exec_runner.go:144] found /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem, removing ...
	I1001 12:32:37.344550    4721 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem
	I1001 12:32:37.344626    4721 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19736-1073/.minikube/key.pem (1675 bytes)
	I1001 12:32:37.344742    4721 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-340000 san=[127.0.0.1 localhost minikube stopped-upgrade-340000]
	I1001 12:32:37.422459    4721 provision.go:177] copyRemoteCerts
	I1001 12:32:37.422493    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 12:32:37.422501    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:37.452283    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 12:32:37.459451    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 12:32:37.465789    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 12:32:37.472931    4721 provision.go:87] duration metric: took 129.536708ms to configureAuth
	I1001 12:32:37.472941    4721 buildroot.go:189] setting minikube options for container-runtime
	I1001 12:32:37.473037    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:32:37.473081    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.473171    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.473176    4721 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 12:32:37.522043    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 12:32:37.522053    4721 buildroot.go:70] root file system type: tmpfs
	I1001 12:32:37.522098    4721 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 12:32:37.522167    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.522271    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.522304    4721 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 12:32:37.574454    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 12:32:37.574515    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.574622    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.574633    4721 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 12:32:37.919367    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
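
Note: the `diff -u old new || { mv ...; systemctl ... }` one-liner above installs the regenerated docker.service only when it differs from the on-disk unit; diff also exits non-zero when the old file is missing (the "can't stat" case here), so a fresh provision always takes the install-and-restart branch. The same write-only-if-changed pattern sketched in Go (paths from the log; bytes.Equal stands in for diff):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    // syncUnit replaces dst and reloads systemd only when the on-disk unit
    // is missing or different -- the effect of the shell one-liner above.
    func syncUnit(dst string, contents []byte) error {
        old, err := os.ReadFile(dst)
        if err == nil && bytes.Equal(old, contents) {
            return nil // unchanged: skip the restart entirely
        }
        if err := os.WriteFile(dst, contents, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
        } {
            if err := exec.Command("systemctl", args...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = syncUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
    }
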
	
	I1001 12:32:37.919383    4721 machine.go:96] duration metric: took 778.864334ms to provisionDockerMachine
	I1001 12:32:37.919389    4721 start.go:293] postStartSetup for "stopped-upgrade-340000" (driver="qemu2")
	I1001 12:32:37.919396    4721 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 12:32:37.919465    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 12:32:37.919475    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:37.945335    4721 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 12:32:37.946510    4721 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 12:32:37.946518    4721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/addons for local assets ...
	I1001 12:32:37.946818    4721 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19736-1073/.minikube/files for local assets ...
	I1001 12:32:37.946967    4721 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem -> 15952.pem in /etc/ssl/certs
	I1001 12:32:37.947109    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 12:32:37.949838    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:32:37.956823    4721 start.go:296] duration metric: took 37.429042ms for postStartSetup
	I1001 12:32:37.956837    4721 fix.go:56] duration metric: took 20.622502125s for fixHost
	I1001 12:32:37.956878    4721 main.go:141] libmachine: Using SSH client type: native
	I1001 12:32:37.956984    4721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102879c00] 0x10287c440 <nil>  [] 0s} localhost 50476 <nil> <nil>}
	I1001 12:32:37.956989    4721 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 12:32:38.006188    4721 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727811157.865323046
	
	I1001 12:32:38.006195    4721 fix.go:216] guest clock: 1727811157.865323046
	I1001 12:32:38.006199    4721 fix.go:229] Guest: 2024-10-01 12:32:37.865323046 -0700 PDT Remote: 2024-10-01 12:32:37.956839 -0700 PDT m=+20.737616168 (delta=-91.515954ms)
	I1001 12:32:38.006209    4721 fix.go:200] guest clock delta is within tolerance: -91.515954ms
	I1001 12:32:38.006212    4721 start.go:83] releasing machines lock for "stopped-upgrade-340000", held for 20.671887041s
	I1001 12:32:38.006278    4721 ssh_runner.go:195] Run: cat /version.json
	I1001 12:32:38.006288    4721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 12:32:38.006287    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:32:38.006303    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	W1001 12:32:38.006886    4721 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50476: connect: connection refused
	I1001 12:32:38.006904    4721 retry.go:31] will retry after 215.635445ms: dial tcp [::1]:50476: connect: connection refused
	W1001 12:32:38.030919    4721 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 12:32:38.030959    4721 ssh_runner.go:195] Run: systemctl --version
	I1001 12:32:38.032776    4721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 12:32:38.034443    4721 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 12:32:38.034471    4721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 12:32:38.037433    4721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 12:32:38.042189    4721 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 12:32:38.042199    4721 start.go:495] detecting cgroup driver to use...
	I1001 12:32:38.042276    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:32:38.049237    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 12:32:38.052583    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 12:32:38.055664    4721 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 12:32:38.055692    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 12:32:38.058609    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:32:38.061603    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 12:32:38.064978    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 12:32:38.068463    4721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 12:32:38.071651    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 12:32:38.074421    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 12:32:38.077610    4721 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 12:32:38.080904    4721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 12:32:38.083558    4721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 12:32:38.086243    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:38.175252    4721 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 12:32:38.181501    4721 start.go:495] detecting cgroup driver to use...
	I1001 12:32:38.181571    4721 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 12:32:38.187683    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:32:38.192667    4721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 12:32:38.200444    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 12:32:38.205324    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:32:38.210398    4721 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 12:32:38.262167    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 12:32:38.296547    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 12:32:38.309076    4721 ssh_runner.go:195] Run: which cri-dockerd
	I1001 12:32:38.310796    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 12:32:38.314043    4721 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 12:32:38.319653    4721 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 12:32:38.398087    4721 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 12:32:38.462381    4721 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 12:32:38.462443    4721 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 12:32:38.467504    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:38.546971    4721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:32:39.673026    4721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.126062292s)
	I1001 12:32:39.673092    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 12:32:39.677335    4721 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 12:32:39.684002    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:32:39.688561    4721 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 12:32:39.765064    4721 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 12:32:39.830042    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:39.911068    4721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 12:32:39.916433    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 12:32:39.920672    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:40.012963    4721 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 12:32:40.052465    4721 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 12:32:40.052558    4721 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 12:32:40.055292    4721 start.go:563] Will wait 60s for crictl version
	I1001 12:32:40.055355    4721 ssh_runner.go:195] Run: which crictl
	I1001 12:32:40.056713    4721 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 12:32:40.070933    4721 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 12:32:40.071019    4721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:32:40.087067    4721 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 12:32:40.102693    4721 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 12:32:40.102844    4721 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 12:32:40.104095    4721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 12:32:40.107642    4721 kubeadm.go:883] updating cluster {Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 12:32:40.107687    4721 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 12:32:40.107742    4721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:32:40.118384    4721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:32:40.118391    4721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:32:40.118439    4721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:32:40.121683    4721 ssh_runner.go:195] Run: which lz4
	I1001 12:32:40.123039    4721 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 12:32:40.124227    4721 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 12:32:40.124236    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 12:32:41.044752    4721 docker.go:649] duration metric: took 921.768334ms to copy over tarball
	I1001 12:32:41.044823    4721 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 12:32:42.206900    4721 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.162089375s)
	I1001 12:32:42.206914    4721 ssh_runner.go:146] rm: /preloaded.tar.lz4
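
Note: since the guest has no /preloaded.tar.lz4, the ~360 MB cached image tarball is copied over and unpacked with `tar --xattrs -I lz4 -C /var -xf`, then deleted. The extraction side can be sketched in Go using the pierrec/lz4 package (an assumed third-party dependency; entry listing stands in for a full extraction):

    package main

    import (
        "archive/tar"
        "io"
        "log"
        "os"

        "github.com/pierrec/lz4/v4"
    )

    // listPreload streams an lz4-compressed tarball and prints each entry,
    // roughly what `tar -I lz4 -tf preloaded.tar.lz4` would show.
    func listPreload(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        tr := tar.NewReader(lz4.NewReader(f))
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            log.Println(hdr.Name) // writing files to disk omitted for brevity
        }
    }

    func main() {
        if err := listPreload("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4"); err != nil {
            log.Fatal(err)
        }
    }
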
	I1001 12:32:42.222464    4721 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 12:32:42.225710    4721 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 12:32:42.230932    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:42.311899    4721 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 12:32:44.007076    4721 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6951955s)
	I1001 12:32:44.007196    4721 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 12:32:44.019804    4721 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 12:32:44.019817    4721 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 12:32:44.019823    4721 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 12:32:44.023647    4721 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:44.025363    4721 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:44.027322    4721 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:44.027721    4721 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:44.029853    4721 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 12:32:44.029937    4721 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:44.031730    4721 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:44.031845    4721 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:44.033353    4721 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:44.034059    4721 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 12:32:44.034539    4721 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:44.034738    4721 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:44.035578    4721 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:44.035641    4721 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:44.036547    4721 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:44.037106    4721 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:45.939441    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:45.981489    4721 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 12:32:45.981543    4721 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:45.981679    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 12:32:46.002062    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 12:32:46.032580    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.048850    4721 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 12:32:46.048884    4721 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.048967    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 12:32:46.061925    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 12:32:46.068122    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.080009    4721 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 12:32:46.080031    4721 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.080103    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 12:32:46.089346    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.091087    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 12:32:46.099990    4721 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 12:32:46.100008    4721 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.100077    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 12:32:46.111302    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W1001 12:32:46.387348    4721 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 12:32:46.387552    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.401893    4721 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 12:32:46.401922    4721 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.402006    4721 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:32:46.416483    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 12:32:46.416616    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:32:46.418431    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 12:32:46.418442    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 12:32:46.445508    4721 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 12:32:46.445525    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1001 12:32:46.603634    4721 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 12:32:46.603776    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.605184    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 12:32:46.607689    4721 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.696241    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 12:32:46.696284    4721 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 12:32:46.696303    4721 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.696341    4721 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 12:32:46.696351    4721 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 12:32:46.696353    4721 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 12:32:46.696364    4721 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.696372    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 12:32:46.696389    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 12:32:46.696400    4721 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 12:32:46.716320    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 12:32:46.716463    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:32:46.719814    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 12:32:46.719825    4721 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 12:32:46.719843    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 12:32:46.719851    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 12:32:46.719918    4721 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 12:32:46.723674    4721 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 12:32:46.723696    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 12:32:46.735613    4721 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 12:32:46.735626    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 12:32:46.783615    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1001 12:32:46.783644    4721 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 12:32:46.783651    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 12:32:46.821623    4721 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 12:32:46.821664    4721 cache_images.go:92] duration metric: took 2.80189325s to LoadCachedImages
	W1001 12:32:46.821707    4721 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0: no such file or directory
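
Note: the fallback above is driven by an image-name mismatch, not missing layers: the preload ships tags under k8s.gcr.io/* while this minikube expects registry.k8s.io/* names, so every image "needs transfer"; the load step then aborts with the warning above because the etcd_3.5.3-0 file is absent from the host's image cache. A toy comparison showing why the tag check fails even though the registries were renamed, not the images (hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        preloaded := "k8s.gcr.io/kube-apiserver:v1.24.1" // from `docker images` above
        wanted := "registry.k8s.io/kube-apiserver:v1.24.1"
        fmt.Println("exact match:", preloaded == wanted) // false: registry host differs
        fmt.Println("same repo path:",
            strings.TrimPrefix(preloaded, "k8s.gcr.io/") ==
                strings.TrimPrefix(wanted, "registry.k8s.io/")) // true
    }
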
	I1001 12:32:46.821713    4721 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 12:32:46.821772    4721 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-340000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 12:32:46.821862    4721 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 12:32:46.834571    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:32:46.834585    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:32:46.834593    4721 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 12:32:46.834603    4721 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-340000 NodeName:stopped-upgrade-340000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 12:32:46.834676    4721 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-340000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
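
Note: the generated kubeadm config is four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is written to /var/tmp/minikube/kubeadm.yaml.new below. A quick sanity check that such multi-document output parses, sketched with gopkg.in/yaml.v3 (an assumed dependency; file name shortened for the example):

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err) // malformed YAML surfaces here
            }
            fmt.Println("kind:", doc["kind"]) // one line per document
        }
    }
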
	
	I1001 12:32:46.834736    4721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 12:32:46.837512    4721 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 12:32:46.837547    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 12:32:46.840587    4721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 12:32:46.845672    4721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 12:32:46.850676    4721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 12:32:46.855818    4721 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 12:32:46.857007    4721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 12:32:46.861001    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:32:46.938753    4721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:32:46.944028    4721 certs.go:68] Setting up /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000 for IP: 10.0.2.15
	I1001 12:32:46.944041    4721 certs.go:194] generating shared ca certs ...
	I1001 12:32:46.944050    4721 certs.go:226] acquiring lock for ca certs: {Name:mk17296519b35110345119718efed98a68b82ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:46.944213    4721 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key
	I1001 12:32:46.944265    4721 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key
	I1001 12:32:46.944274    4721 certs.go:256] generating profile certs ...
	I1001 12:32:46.944348    4721 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key
	I1001 12:32:46.944367    4721 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7
	I1001 12:32:46.944379    4721 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 12:32:47.041919    4721 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 ...
	I1001 12:32:47.041930    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7: {Name:mk42a3009433a7b67664e87e44a566f172d07094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.049953    4721 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7 ...
	I1001 12:32:47.049960    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7: {Name:mka5194fa90f8ab5483c5dfcbae6295edf488a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.050129    4721 certs.go:381] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt.0d9cfbc7 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt
	I1001 12:32:47.052341    4721 certs.go:385] copying /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key.0d9cfbc7 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key
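
Note: the apiserver certificate minted here is a leaf signed by minikubeCA with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 10.0.2.15 (service VIP, loopback, and guest addresses). A self-contained sketch of producing a certificate with those SANs using Go's standard library (self-signed for brevity, whereas minikube signs with its CA key):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            // The four IP SANs from the crypto.go line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            KeyUsage:    x509.KeyUsageDigitalSignature,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
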
	I1001 12:32:47.052507    4721 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.key
	I1001 12:32:47.052641    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem (1338 bytes)
	W1001 12:32:47.052671    4721 certs.go:480] ignoring /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595_empty.pem, impossibly tiny 0 bytes
	I1001 12:32:47.052677    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 12:32:47.052703    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem (1078 bytes)
	I1001 12:32:47.052728    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem (1123 bytes)
	I1001 12:32:47.052756    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/key.pem (1675 bytes)
	I1001 12:32:47.052807    4721 certs.go:484] found cert: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem (1708 bytes)
	I1001 12:32:47.053184    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 12:32:47.059842    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 12:32:47.066103    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 12:32:47.073046    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1001 12:32:47.080530    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 12:32:47.087708    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 12:32:47.094529    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 12:32:47.101362    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 12:32:47.109320    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/1595.pem --> /usr/share/ca-certificates/1595.pem (1338 bytes)
	I1001 12:32:47.117011    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/ssl/certs/15952.pem --> /usr/share/ca-certificates/15952.pem (1708 bytes)
	I1001 12:32:47.124679    4721 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 12:32:47.132547    4721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 12:32:47.138157    4721 ssh_runner.go:195] Run: openssl version
	I1001 12:32:47.140320    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1595.pem && ln -fs /usr/share/ca-certificates/1595.pem /etc/ssl/certs/1595.pem"
	I1001 12:32:47.143784    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.145418    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:02 /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.145449    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1595.pem
	I1001 12:32:47.147488    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1595.pem /etc/ssl/certs/51391683.0"
	I1001 12:32:47.150754    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15952.pem && ln -fs /usr/share/ca-certificates/15952.pem /etc/ssl/certs/15952.pem"
	I1001 12:32:47.154070    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.155689    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:02 /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.155720    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15952.pem
	I1001 12:32:47.157723    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15952.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 12:32:47.161292    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 12:32:47.164996    4721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.166739    4721 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 18:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.166772    4721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 12:32:47.168686    4721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 12:32:47.172185    4721 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 12:32:47.173697    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 12:32:47.175829    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 12:32:47.177820    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 12:32:47.179941    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 12:32:47.181945    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 12:32:47.184047    4721 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 12:32:47.186059    4721 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-340000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50511 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-340000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 12:32:47.186133    4721 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:32:47.199252    4721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 12:32:47.202505    4721 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 12:32:47.202513    4721 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 12:32:47.202561    4721 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 12:32:47.206272    4721 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 12:32:47.206593    4721 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-340000" does not appear in /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:32:47.206708    4721 kubeconfig.go:62] /Users/jenkins/minikube-integration/19736-1073/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-340000" cluster setting kubeconfig missing "stopped-upgrade-340000" context setting]
	I1001 12:32:47.206923    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:32:47.207388    4721 kapi.go:59] client config for stopped-upgrade-340000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:32:47.207749    4721 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 12:32:47.211050    4721 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-340000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1001 12:32:47.211057    4721 kubeadm.go:1160] stopping kube-system containers ...
	I1001 12:32:47.211119    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 12:32:47.224939    4721 docker.go:483] Stopping containers: [d9956cf09477 ccd8354deb5e 7ad38fcc33d6 e0f6b93f81e7 316e5a1a5aed 64bb71576196 bc78f59fb2e5 4d8a8c79d4fe]
	I1001 12:32:47.225030    4721 ssh_runner.go:195] Run: docker stop d9956cf09477 ccd8354deb5e 7ad38fcc33d6 e0f6b93f81e7 316e5a1a5aed 64bb71576196 bc78f59fb2e5 4d8a8c79d4fe
	I1001 12:32:47.236319    4721 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 12:32:47.242097    4721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:32:47.245224    4721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:32:47.245233    4721 kubeadm.go:157] found existing configuration files:
	
	I1001 12:32:47.245274    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I1001 12:32:47.248355    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:32:47.248387    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:32:47.251637    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I1001 12:32:47.254124    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:32:47.254159    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:32:47.257162    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I1001 12:32:47.260672    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:32:47.260724    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:32:47.263818    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I1001 12:32:47.266489    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:32:47.266536    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 12:32:47.269420    4721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:32:47.273021    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.298584    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.762104    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.904728    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.926625    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 12:32:47.954509    4721 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:32:47.954601    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.456680    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.956660    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:32:48.960775    4721 api_server.go:72] duration metric: took 1.006288584s to wait for apiserver process to appear ...
	I1001 12:32:48.960785    4721 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:32:48.960799    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:53.962914    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:53.963008    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:32:58.963756    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:32:58.963835    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:03.964696    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:03.964789    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:08.966045    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:08.966081    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:13.966548    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:13.966643    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:18.968281    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:18.968296    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:23.969940    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:23.969992    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:28.972326    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:28.972427    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:33.975061    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:33.975108    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:38.976880    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:38.976908    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:43.978557    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:43.978600    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:48.976009    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:48.976455    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:49.010273    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:33:49.010456    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:49.030201    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:33:49.030308    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:49.045189    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:33:49.045287    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:49.064740    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:33:49.064835    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:49.075751    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:33:49.075834    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:49.086746    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:33:49.086831    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:49.097388    4721 logs.go:276] 0 containers: []
	W1001 12:33:49.097401    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:49.097481    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:49.108032    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:33:49.108048    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:49.108054    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:49.112203    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:33:49.112212    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:33:49.127085    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:33:49.127094    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:33:49.139008    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:33:49.139022    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:33:49.151627    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:33:49.151638    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:33:49.196291    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:33:49.196301    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:33:49.210211    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:33:49.210222    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:33:49.222063    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:33:49.222075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:33:49.237137    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:33:49.237148    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:33:49.254161    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:49.254170    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:49.336181    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:49.336192    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:49.360840    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:33:49.360847    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:49.373091    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:49.373103    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:49.410430    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:33:49.410441    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:33:49.425029    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:33:49.425039    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:33:49.436739    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:33:49.436751    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:33:51.947715    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:33:56.946911    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:33:56.947112    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:33:56.969522    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:33:56.969649    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:33:56.984743    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:33:56.984844    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:33:56.997373    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:33:56.997464    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:33:57.007746    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:33:57.007839    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:33:57.018527    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:33:57.018613    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:33:57.028885    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:33:57.028986    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:33:57.039158    4721 logs.go:276] 0 containers: []
	W1001 12:33:57.039171    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:33:57.039241    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:33:57.049365    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:33:57.049383    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:33:57.049389    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:33:57.087616    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:33:57.087628    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:33:57.098633    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:33:57.098645    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:33:57.110634    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:33:57.110644    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:33:57.115118    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:33:57.115132    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:33:57.149635    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:33:57.149656    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:33:57.163894    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:33:57.163908    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:33:57.176182    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:33:57.176196    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:33:57.191558    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:33:57.191575    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:33:57.203337    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:33:57.203349    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:33:57.243261    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:33:57.243276    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:33:57.257959    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:33:57.257973    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:33:57.270248    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:33:57.270264    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:33:57.285598    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:33:57.285607    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:33:57.303533    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:33:57.303548    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:33:57.315269    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:33:57.315279    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:33:59.840640    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:04.841084    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:04.841326    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:04.857480    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:04.857582    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:04.870437    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:04.870531    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:04.881910    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:04.881993    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:04.892452    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:04.892545    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:04.902906    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:04.902990    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:04.914020    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:04.914108    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:04.924131    4721 logs.go:276] 0 containers: []
	W1001 12:34:04.924143    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:04.924215    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:04.934246    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:04.934265    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:04.934271    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:04.938912    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:04.938921    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:04.957623    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:04.957639    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:04.969549    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:04.969559    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:04.984972    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:04.984984    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:05.024230    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:05.024242    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:05.035874    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:05.035889    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:05.047588    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:05.047600    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:05.065465    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:05.065482    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:05.092623    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:05.092635    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:05.103922    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:05.103935    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:05.141543    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:05.141554    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:05.158182    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:05.158192    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:05.169309    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:05.169319    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:05.207861    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:05.207873    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:05.221820    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:05.221834    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:07.737689    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:12.738784    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:12.738982    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:12.754628    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:12.754717    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:12.765445    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:12.765534    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:12.776164    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:12.776240    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:12.787422    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:12.787515    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:12.797987    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:12.798076    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:12.808264    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:12.808343    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:12.818383    4721 logs.go:276] 0 containers: []
	W1001 12:34:12.818396    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:12.818469    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:12.828516    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:12.828535    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:12.828540    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:12.854058    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:12.854064    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:12.889506    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:12.889517    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:12.904385    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:12.904395    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:12.921449    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:12.921459    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:12.932968    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:12.932978    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:12.945502    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:12.945516    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:12.986358    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:12.986371    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:13.005421    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:13.005434    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:13.017464    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:13.017479    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:13.029363    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:13.029374    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:13.065520    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:13.065529    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:13.084876    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:13.084886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:13.096326    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:13.096337    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:13.114564    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:13.114579    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:13.118603    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:13.118611    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:15.635440    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:20.637108    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:20.637609    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:20.673268    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:20.673490    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:20.694833    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:20.694952    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:20.709840    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:20.709938    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:20.722922    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:20.723014    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:20.733680    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:20.733765    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:20.752853    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:20.752942    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:20.763457    4721 logs.go:276] 0 containers: []
	W1001 12:34:20.763474    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:20.763546    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:20.774449    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:20.774472    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:20.774477    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:20.811779    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:20.811788    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:20.815797    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:20.815805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:20.830777    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:20.830789    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:20.842477    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:20.842489    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:20.860147    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:20.860158    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:20.874038    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:20.874048    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:20.898191    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:20.898200    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:20.912274    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:20.912285    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:20.927295    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:20.927306    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:20.938602    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:20.938618    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:20.950733    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:20.950743    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:20.961905    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:20.961920    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:20.999104    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:20.999116    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:21.037653    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:21.037665    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:21.048855    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:21.048871    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:23.563072    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:28.564939    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:28.565234    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:28.591766    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:28.591923    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:28.609188    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:28.609277    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:28.622228    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:28.622332    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:28.633955    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:28.634036    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:28.644569    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:28.644642    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:28.658945    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:28.659020    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:28.669393    4721 logs.go:276] 0 containers: []
	W1001 12:34:28.669407    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:28.669476    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:28.685243    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:28.685261    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:28.685267    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:28.689951    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:28.689962    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:28.725055    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:28.725066    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:28.737424    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:28.737435    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:28.756841    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:28.756852    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:28.795034    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:28.795045    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:28.832230    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:28.832244    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:28.846710    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:28.846722    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:28.858603    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:28.858613    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:28.876585    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:28.876595    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:28.888473    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:28.888483    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:28.903753    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:28.903763    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:28.927141    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:28.927149    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:28.940613    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:28.940627    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:28.951664    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:28.951676    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:28.966193    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:28.966203    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:31.484681    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:36.486768    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:36.487292    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:36.525019    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:36.525185    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:36.544001    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:36.544129    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:36.557991    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:36.558091    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:36.570349    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:36.570440    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:36.580945    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:36.581028    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:36.597638    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:36.597717    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:36.608312    4721 logs.go:276] 0 containers: []
	W1001 12:34:36.608325    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:36.608392    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:36.620245    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:36.620264    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:36.620269    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:36.657851    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:36.657866    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:36.696894    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:36.696909    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:36.722223    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:36.722234    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:36.741242    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:36.741253    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:36.753552    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:36.753568    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:36.766637    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:36.766651    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:36.778455    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:36.778466    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:36.816354    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:36.816365    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:36.830438    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:36.830479    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:36.841882    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:36.841892    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:36.856557    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:36.856572    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:36.868814    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:36.868827    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:36.873395    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:36.873404    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:36.888659    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:36.888673    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:36.914058    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:36.914066    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:39.433564    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:44.433826    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:44.433988    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:44.445830    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:44.445916    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:44.456787    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:44.456870    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:44.469204    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:44.469293    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:44.479321    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:44.479411    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:44.489397    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:44.489482    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:44.499912    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:44.499994    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:44.510074    4721 logs.go:276] 0 containers: []
	W1001 12:34:44.510087    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:44.510162    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:44.520703    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:44.520722    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:44.520728    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:44.532387    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:44.532399    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:44.544229    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:44.544240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:44.581795    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:44.581806    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:44.595943    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:44.595954    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
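
kubelet and the Docker/cri-docker daemons run as systemd services in the guest rather than as containers, so their logs come from journalctl, capped at the most recent 400 entries with -n 400. A sketch of that collection step follows; the helper is assumed, not minikube's code, and the unit names are taken from the log.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // journal returns the last 400 journal entries for the given systemd units.
    func journal(units ...string) string {
        args := []string{"journalctl"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        args = append(args, "-n", "400")
        out, _ := exec.Command("sudo", args...).CombinedOutput()
        return string(out)
    }

    func main() {
        fmt.Print(journal("kubelet"))              // sudo journalctl -u kubelet -n 400
        fmt.Print(journal("docker", "cri-docker")) // the "Docker" gathering step
    }
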
	I1001 12:34:44.633667    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:44.633677    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
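
The "describe nodes" step runs the version-matched kubectl that minikube keeps under /var/lib/minikube/binaries/<version>/ inside the guest, pointed at the VM-local kubeconfig, so the diagnostic does not depend on any kubectl installed on the host. A sketch, with paths copied from the log and everything else assumed (run directly here rather than through bash -c as in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        const k8sVersion = "v1.24.1" // matches the binaries path in the log
        kubectl := "/var/lib/minikube/binaries/" + k8sVersion + "/kubectl"
        out, err := exec.Command("sudo", kubectl, "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
        if err != nil {
            fmt.Println("describe nodes failed:", err)
        }
        fmt.Print(string(out))
    }
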
	I1001 12:34:44.668961    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:44.668973    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:44.682894    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:44.682903    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:44.693898    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:44.693910    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:44.707065    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:44.707077    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:44.719761    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:44.719773    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:44.732499    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:44.732508    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
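
The dmesg step keeps kernel output small and non-interactive: -H requests human-readable formatting, -P suppresses the pager that -H would otherwise start, -L=never disables color, --level restricts output to warn-and-worse priorities, and tail -n 400 caps the size. Reproduced as a sketch under the same assumptions as the snippets above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Kernel warnings and errors only, last 400 lines, no pager or color.
        const cmd = "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Print(string(out))
    }
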
	I1001 12:34:44.736879    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:44.736886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:44.751723    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:44.751737    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:44.769467    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:44.769476    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:44.794167    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:44.794181    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
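
Taken together, each retry cycle is: probe healthz, enumerate container IDs per component, then fan out one docker logs --tail 400 call per ID, which produces the paired "Gathering logs for ... / Run: ..." lines above. A hedged sketch of that fan-out follows; the component names and IDs are copied from the log purely for illustration.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather pulls the last 400 log lines for one container through bash,
    // mirroring the Run: lines in the cycle above.
    func gather(component, id string) {
        fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
        out, err := exec.Command("/bin/bash", "-c",
            "docker logs --tail 400 "+id).CombinedOutput()
        if err != nil {
            fmt.Println("gather failed:", err)
            return
        }
        fmt.Print(string(out))
    }

    func main() {
        // Both IDs per component: the exited pre-restart instance and the
        // current one, matching the "2 containers" lines above.
        gather("kube-apiserver", "956404de281e")
        gather("kube-apiserver", "bc78f59fb2e5")
        gather("etcd", "4d0f920ec84f")
        gather("etcd", "316e5a1a5aed")
    }
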
	I1001 12:34:47.311287    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:34:52.313707    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:34:52.313911    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:34:52.327920    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:34:52.328023    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:34:52.345898    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:34:52.345973    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:34:52.356825    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:34:52.356898    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:34:52.368024    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:34:52.368108    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:34:52.378681    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:34:52.378769    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:34:52.389901    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:34:52.389991    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:34:52.404922    4721 logs.go:276] 0 containers: []
	W1001 12:34:52.404936    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:34:52.405007    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:34:52.415857    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:34:52.415873    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:34:52.415879    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:34:52.429384    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:34:52.429394    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:34:52.444408    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:34:52.444418    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:34:52.455977    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:34:52.455990    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:34:52.467626    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:34:52.467638    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:34:52.481891    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:34:52.481902    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:34:52.496429    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:34:52.496444    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:34:52.514994    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:34:52.515011    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:34:52.529032    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:34:52.529043    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:34:52.553615    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:34:52.553624    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:34:52.557863    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:34:52.557868    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:34:52.592769    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:34:52.592781    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:34:52.630706    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:34:52.630724    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:34:52.643168    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:34:52.643181    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:34:52.659572    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:34:52.659588    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:34:52.672133    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:34:52.672150    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:34:55.213271    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:00.214650    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:00.214931    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:00.232934    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:00.233045    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:00.247110    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:00.247195    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:00.259003    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:00.259083    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:00.269396    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:00.269481    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:00.279783    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:00.279854    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:00.290356    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:00.290443    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:00.301689    4721 logs.go:276] 0 containers: []
	W1001 12:35:00.301706    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:00.301779    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:00.311997    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:00.312015    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:00.312020    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:00.316656    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:00.316664    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:00.351081    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:00.351097    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:00.388905    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:00.388918    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:00.401141    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:00.401153    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:00.418123    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:00.418147    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:00.455880    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:00.455888    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:00.468106    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:00.468118    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:00.487574    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:00.487587    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:00.501467    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:00.501476    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:00.520881    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:00.520893    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:00.535389    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:00.535400    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:00.546557    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:00.546570    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:00.561042    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:00.561058    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:00.575395    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:00.575405    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:00.586633    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:00.586643    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:03.111410    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:08.113593    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:08.113862    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:08.137900    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:08.138035    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:08.156586    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:08.156684    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:08.170342    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:08.170432    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:08.181408    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:08.181488    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:08.193116    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:08.193207    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:08.204665    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:08.204749    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:08.214899    4721 logs.go:276] 0 containers: []
	W1001 12:35:08.214913    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:08.214991    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:08.226214    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:08.226234    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:08.226240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:08.238229    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:08.238240    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:08.255649    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:08.255660    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:08.273551    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:08.273565    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:08.297199    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:08.297207    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:08.308830    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:08.308840    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:08.346148    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:08.346163    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:08.350470    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:08.350477    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:08.373936    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:08.373950    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:08.392005    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:08.392020    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:08.407771    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:08.407787    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:08.444353    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:08.444367    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:08.458120    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:08.458133    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:08.468688    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:08.468700    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:08.485531    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:08.485545    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:08.496792    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:08.496804    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:11.033221    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:16.035415    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:16.035716    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:16.060915    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:16.061043    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:16.078745    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:16.078880    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:16.091401    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:16.091479    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:16.106572    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:16.106661    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:16.116946    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:16.117036    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:16.127573    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:16.127662    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:16.138673    4721 logs.go:276] 0 containers: []
	W1001 12:35:16.138688    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:16.138757    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:16.150253    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:16.150270    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:16.150277    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:16.188705    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:16.188716    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:16.192971    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:16.192980    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:16.228080    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:16.228094    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:16.244706    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:16.244718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:16.257223    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:16.257235    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:16.271274    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:16.271288    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:16.308909    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:16.308920    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:16.325657    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:16.325668    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:16.340235    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:16.340248    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:16.352266    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:16.352277    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:16.366042    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:16.366052    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:16.380869    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:16.380880    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:16.401892    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:16.401903    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:16.413167    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:16.413179    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:16.424267    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:16.424277    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:18.948665    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:23.950591    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:23.951122    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:23.993515    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:23.993680    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:24.023152    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:24.023257    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:24.035726    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:24.035810    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:24.049887    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:24.049971    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:24.060636    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:24.060747    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:24.071174    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:24.071260    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:24.081672    4721 logs.go:276] 0 containers: []
	W1001 12:35:24.081690    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:24.081763    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:24.092344    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:24.092361    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:24.092366    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:24.112213    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:24.112225    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:24.127483    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:24.127497    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:24.150418    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:24.150430    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:24.163620    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:24.163637    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:24.167872    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:24.167882    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:24.182534    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:24.182546    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:24.222453    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:24.222467    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:24.236151    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:24.236161    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:24.272083    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:24.272100    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:24.283827    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:24.283839    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:24.295704    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:24.295718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:24.307108    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:24.307121    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:24.342812    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:24.342819    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:24.356730    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:24.356742    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:24.380217    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:24.380227    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:26.894855    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:31.897269    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:31.897651    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:31.929185    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:31.929348    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:31.948202    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:31.948318    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:31.962926    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:31.963024    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:31.975511    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:31.975595    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:31.986105    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:31.986193    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:31.996975    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:31.997057    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:32.014713    4721 logs.go:276] 0 containers: []
	W1001 12:35:32.014725    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:32.014803    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:32.026881    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:32.026900    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:32.026906    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:32.038622    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:32.038637    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:32.052363    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:32.052375    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:32.065302    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:32.065314    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:32.110744    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:32.110757    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:32.124791    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:32.124805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:32.139196    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:32.139212    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:32.154410    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:32.154422    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:32.172014    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:32.172024    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:32.211613    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:32.211624    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:32.236201    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:32.236211    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:32.270943    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:32.270955    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:32.284957    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:32.284972    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:32.297617    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:32.297631    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:32.309549    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:32.309562    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:32.321455    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:32.321470    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:34.826636    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:39.829166    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:39.829331    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:39.842208    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:39.842299    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:39.853734    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:39.853821    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:39.867595    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:39.867684    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:39.877663    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:39.877746    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:39.892947    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:39.893035    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:39.903517    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:39.903595    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:39.913636    4721 logs.go:276] 0 containers: []
	W1001 12:35:39.913649    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:39.913727    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:39.924021    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:39.924041    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:39.924046    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:39.939071    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:39.939087    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:39.959182    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:39.959193    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:39.976076    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:39.976091    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:39.980534    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:39.980540    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:39.991657    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:39.991668    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:40.003113    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:40.003127    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:40.026964    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:40.026973    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:40.061999    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:40.062013    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:40.076816    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:40.076829    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:40.092032    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:40.092048    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:40.103752    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:40.103762    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:40.115956    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:40.115967    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:40.134745    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:40.134761    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:40.173793    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:40.173813    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:40.217059    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:40.217075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:42.729941    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:47.732080    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:47.732304    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:47.752330    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:47.752450    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:47.767310    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:47.767409    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:47.778799    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:47.778882    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:47.789049    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:47.789124    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:47.802149    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:47.802233    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:47.819798    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:47.819875    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:47.834491    4721 logs.go:276] 0 containers: []
	W1001 12:35:47.834501    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:47.834566    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:47.844701    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:47.844719    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:47.844725    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:47.856064    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:47.856075    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:47.879037    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:47.879044    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:47.893982    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:47.893993    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:47.909296    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:47.909306    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:47.921099    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:47.921110    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:47.933373    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:47.933386    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:47.944750    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:47.944762    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:47.981264    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:47.981272    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:48.018666    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:48.018676    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:48.037797    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:48.037809    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:48.049506    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:48.049518    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:48.053597    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:48.053604    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:48.092364    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:48.092382    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:48.114216    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:48.114229    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:48.130170    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:48.130182    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:50.645264    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:35:55.647073    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:35:55.647348    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:35:55.668443    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:35:55.668572    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:35:55.683883    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:35:55.683987    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:35:55.696513    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:35:55.696598    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:35:55.709055    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:35:55.709147    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:35:55.719618    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:35:55.719710    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:35:55.730533    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:35:55.730618    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:35:55.740518    4721 logs.go:276] 0 containers: []
	W1001 12:35:55.740533    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:35:55.740610    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:35:55.751330    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:35:55.751347    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:35:55.751353    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:35:55.762121    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:35:55.762133    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:35:55.773431    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:35:55.773444    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:35:55.788290    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:35:55.788300    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:35:55.830340    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:35:55.830356    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:35:55.842931    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:35:55.842941    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:35:55.854331    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:35:55.854340    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:35:55.870936    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:35:55.870948    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:35:55.908945    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:35:55.908953    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:35:55.912905    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:35:55.912915    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:35:55.948080    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:35:55.948093    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:35:55.963396    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:35:55.963409    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:35:55.978616    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:35:55.978630    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:35:55.994273    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:35:55.994285    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:35:56.007834    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:35:56.007851    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:35:56.025664    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:35:56.025676    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:35:58.551537    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:03.553964    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:03.554217    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:03.573170    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:03.573286    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:03.586920    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:03.587002    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:03.599156    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:03.599251    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:03.609482    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:03.609562    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:03.623377    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:03.623470    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:03.634371    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:03.634453    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:03.644847    4721 logs.go:276] 0 containers: []
	W1001 12:36:03.644865    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:03.644930    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:03.655383    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:03.655401    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:03.655407    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:03.659800    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:03.659807    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:03.699046    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:03.699058    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:03.711174    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:03.711184    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:03.724445    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:03.724458    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:03.764048    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:03.764056    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:03.777842    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:03.777855    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:03.804461    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:03.804477    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:03.816737    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:03.816749    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:03.834305    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:03.834320    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:03.845085    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:03.845097    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:03.867749    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:03.867757    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:03.901505    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:03.901516    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:03.915917    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:03.915927    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:03.930643    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:03.930656    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:03.942579    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:03.942588    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:06.456007    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:11.458166    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:11.458304    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:11.476852    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:11.476945    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:11.488913    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:11.489004    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:11.499115    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:11.499189    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:11.509887    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:11.509964    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:11.520664    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:11.520754    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:11.531821    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:11.531910    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:11.542380    4721 logs.go:276] 0 containers: []
	W1001 12:36:11.542394    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:11.542461    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:11.552961    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:11.552980    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:11.552985    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:11.568432    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:11.568446    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:11.581602    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:11.581616    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:11.603853    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:11.603864    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:11.640296    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:11.640305    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:11.675092    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:11.675107    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:11.713299    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:11.713311    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:11.727588    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:11.727603    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:11.742651    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:11.742662    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:11.760419    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:11.760434    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:11.764601    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:11.764612    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:11.776252    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:11.776268    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:11.787431    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:11.787442    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:11.806922    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:11.806937    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:11.822664    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:11.822677    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:11.834754    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:11.834766    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:14.348819    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:19.351078    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:19.351276    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:19.367744    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:19.367843    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:19.378845    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:19.378939    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:19.389868    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:19.389957    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:19.401387    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:19.401466    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:19.411760    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:19.411832    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:19.422542    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:19.422627    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:19.432929    4721 logs.go:276] 0 containers: []
	W1001 12:36:19.432942    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:19.433018    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:19.443657    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:19.443675    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:19.443680    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:19.456076    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:19.456087    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:19.468194    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:19.468209    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:19.479745    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:19.479757    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:19.491254    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:19.491268    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:19.495883    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:19.495890    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:19.509653    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:19.509666    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:19.523871    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:19.523888    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:19.544391    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:19.544410    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:19.556391    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:19.556402    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:19.580177    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:19.580184    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:19.617932    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:19.617939    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:19.661432    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:19.661449    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:19.675480    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:19.675496    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:19.712829    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:19.712840    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:19.727796    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:19.727810    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:22.241575    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:27.243844    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:27.244201    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:27.271551    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:27.271748    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:27.289581    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:27.289706    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:27.306381    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:27.306475    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:27.321324    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:27.321407    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:27.331771    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:27.331852    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:27.342668    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:27.342744    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:27.352886    4721 logs.go:276] 0 containers: []
	W1001 12:36:27.352900    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:27.352976    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:27.364942    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:27.364963    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:27.364969    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:27.376112    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:27.376127    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:27.387655    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:27.387666    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:27.404866    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:27.404878    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:27.417352    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:27.417365    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:27.429060    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:27.429075    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:27.463063    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:27.463075    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:27.479112    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:27.479123    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:27.493321    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:27.493333    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:27.514961    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:27.514969    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:27.519443    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:27.519450    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:27.557860    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:27.557872    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:27.569938    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:27.569950    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:27.585115    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:27.585130    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:27.621527    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:27.621538    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:27.635290    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:27.635300    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:30.154702    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:35.156953    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:35.157318    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:35.190759    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:35.190915    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:35.210160    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:35.210275    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:35.225427    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:35.225522    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:35.237543    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:35.237637    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:35.249206    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:35.249289    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:35.259860    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:35.259944    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:35.270826    4721 logs.go:276] 0 containers: []
	W1001 12:36:35.270838    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:35.270908    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:35.281464    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:35.281482    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:35.281487    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:35.317935    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:35.317944    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:35.331606    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:35.331615    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:35.343187    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:35.343201    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:35.355568    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:35.355580    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:35.359774    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:35.359781    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:35.378017    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:35.378028    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:35.401407    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:35.401421    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:35.444588    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:35.444605    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:35.477839    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:35.477856    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:35.492389    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:35.492401    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:35.504060    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:35.504071    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:35.517617    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:35.517630    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:35.554629    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:35.554640    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:35.570068    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:35.570080    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:35.581721    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:35.581732    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:38.096722    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:43.099391    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:43.099948    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:36:43.143598    4721 logs.go:276] 2 containers: [956404de281e bc78f59fb2e5]
	I1001 12:36:43.143778    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:36:43.163798    4721 logs.go:276] 2 containers: [4d0f920ec84f 316e5a1a5aed]
	I1001 12:36:43.163917    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:36:43.180662    4721 logs.go:276] 1 containers: [d04375a2ee30]
	I1001 12:36:43.180757    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:36:43.193202    4721 logs.go:276] 2 containers: [c952b19735c2 7ad38fcc33d6]
	I1001 12:36:43.193301    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:36:43.208166    4721 logs.go:276] 1 containers: [2cdb05dca894]
	I1001 12:36:43.208249    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:36:43.220060    4721 logs.go:276] 2 containers: [ecbe68f7a6b4 d9956cf09477]
	I1001 12:36:43.220147    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:36:43.232124    4721 logs.go:276] 0 containers: []
	W1001 12:36:43.232136    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:36:43.232215    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:36:43.242734    4721 logs.go:276] 1 containers: [5cc1ba08286c]
	I1001 12:36:43.242751    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:36:43.242758    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:36:43.279822    4721 logs.go:123] Gathering logs for kube-scheduler [c952b19735c2] ...
	I1001 12:36:43.279835    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c952b19735c2"
	I1001 12:36:43.292094    4721 logs.go:123] Gathering logs for kube-controller-manager [ecbe68f7a6b4] ...
	I1001 12:36:43.292108    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecbe68f7a6b4"
	I1001 12:36:43.309792    4721 logs.go:123] Gathering logs for storage-provisioner [5cc1ba08286c] ...
	I1001 12:36:43.309805    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5cc1ba08286c"
	I1001 12:36:43.321350    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:36:43.321362    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:36:43.346482    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:36:43.346492    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:36:43.350654    4721 logs.go:123] Gathering logs for etcd [4d0f920ec84f] ...
	I1001 12:36:43.350662    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d0f920ec84f"
	I1001 12:36:43.364816    4721 logs.go:123] Gathering logs for coredns [d04375a2ee30] ...
	I1001 12:36:43.364830    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d04375a2ee30"
	I1001 12:36:43.397706    4721 logs.go:123] Gathering logs for kube-scheduler [7ad38fcc33d6] ...
	I1001 12:36:43.397718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ad38fcc33d6"
	I1001 12:36:43.413523    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:36:43.413537    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:36:43.452120    4721 logs.go:123] Gathering logs for kube-controller-manager [d9956cf09477] ...
	I1001 12:36:43.452130    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9956cf09477"
	I1001 12:36:43.464602    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:36:43.464615    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:36:43.478645    4721 logs.go:123] Gathering logs for kube-apiserver [956404de281e] ...
	I1001 12:36:43.478661    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 956404de281e"
	I1001 12:36:43.492847    4721 logs.go:123] Gathering logs for kube-apiserver [bc78f59fb2e5] ...
	I1001 12:36:43.492856    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc78f59fb2e5"
	I1001 12:36:43.530449    4721 logs.go:123] Gathering logs for etcd [316e5a1a5aed] ...
	I1001 12:36:43.530471    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 316e5a1a5aed"
	I1001 12:36:43.549104    4721 logs.go:123] Gathering logs for kube-proxy [2cdb05dca894] ...
	I1001 12:36:43.549114    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2cdb05dca894"
	I1001 12:36:46.062490    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:51.064828    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:36:51.065001    4721 kubeadm.go:597] duration metric: took 4m3.886574291s to restartPrimaryControlPlane
	W1001 12:36:51.065114    4721 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 12:36:51.065169    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 12:36:52.117877    4721 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.052720833s)
	I1001 12:36:52.117949    4721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 12:36:52.123053    4721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 12:36:52.126155    4721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 12:36:52.129592    4721 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 12:36:52.129601    4721 kubeadm.go:157] found existing configuration files:
	
	I1001 12:36:52.129627    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf
	I1001 12:36:52.132923    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 12:36:52.132951    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 12:36:52.135873    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf
	I1001 12:36:52.138524    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 12:36:52.138552    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 12:36:52.141903    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf
	I1001 12:36:52.145117    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 12:36:52.145149    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 12:36:52.148088    4721 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf
	I1001 12:36:52.150655    4721 kubeadm.go:163] "https://control-plane.minikube.internal:50511" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50511 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 12:36:52.150679    4721 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 12:36:52.153979    4721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 12:36:52.170950    4721 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 12:36:52.170991    4721 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 12:36:52.221830    4721 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 12:36:52.221887    4721 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 12:36:52.222015    4721 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 12:36:52.273385    4721 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 12:36:52.277676    4721 out.go:235]   - Generating certificates and keys ...
	I1001 12:36:52.277712    4721 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 12:36:52.277751    4721 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 12:36:52.277797    4721 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 12:36:52.277830    4721 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 12:36:52.277875    4721 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 12:36:52.277903    4721 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 12:36:52.277944    4721 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 12:36:52.277979    4721 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 12:36:52.278021    4721 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 12:36:52.278072    4721 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 12:36:52.278091    4721 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 12:36:52.278123    4721 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 12:36:52.446212    4721 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 12:36:52.505021    4721 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 12:36:52.636464    4721 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 12:36:52.683470    4721 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 12:36:52.713766    4721 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 12:36:52.714201    4721 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 12:36:52.714308    4721 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 12:36:52.810731    4721 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 12:36:52.817872    4721 out.go:235]   - Booting up control plane ...
	I1001 12:36:52.817924    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 12:36:52.817963    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 12:36:52.818024    4721 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 12:36:52.818061    4721 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 12:36:52.818153    4721 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 12:36:57.316134    4721 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501314 seconds
	I1001 12:36:57.316201    4721 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 12:36:57.320322    4721 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 12:36:57.841517    4721 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 12:36:57.841762    4721 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-340000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 12:36:58.346273    4721 kubeadm.go:310] [bootstrap-token] Using token: 55wevq.3qkjkejxbsnf8vog
	I1001 12:36:58.348793    4721 out.go:235]   - Configuring RBAC rules ...
	I1001 12:36:58.348849    4721 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 12:36:58.348899    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 12:36:58.355312    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 12:36:58.356199    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 12:36:58.357114    4721 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 12:36:58.358031    4721 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 12:36:58.362339    4721 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 12:36:58.544001    4721 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 12:36:58.751979    4721 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 12:36:58.752457    4721 kubeadm.go:310] 
	I1001 12:36:58.752498    4721 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 12:36:58.752509    4721 kubeadm.go:310] 
	I1001 12:36:58.752555    4721 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 12:36:58.752563    4721 kubeadm.go:310] 
	I1001 12:36:58.752583    4721 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 12:36:58.752618    4721 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 12:36:58.752648    4721 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 12:36:58.752652    4721 kubeadm.go:310] 
	I1001 12:36:58.752693    4721 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 12:36:58.752699    4721 kubeadm.go:310] 
	I1001 12:36:58.752729    4721 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 12:36:58.752733    4721 kubeadm.go:310] 
	I1001 12:36:58.752770    4721 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 12:36:58.752810    4721 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 12:36:58.752854    4721 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 12:36:58.752858    4721 kubeadm.go:310] 
	I1001 12:36:58.752909    4721 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 12:36:58.752961    4721 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 12:36:58.752966    4721 kubeadm.go:310] 
	I1001 12:36:58.753010    4721 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 55wevq.3qkjkejxbsnf8vog \
	I1001 12:36:58.753075    4721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e \
	I1001 12:36:58.753087    4721 kubeadm.go:310] 	--control-plane 
	I1001 12:36:58.753092    4721 kubeadm.go:310] 
	I1001 12:36:58.753137    4721 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 12:36:58.753140    4721 kubeadm.go:310] 
	I1001 12:36:58.753201    4721 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 55wevq.3qkjkejxbsnf8vog \
	I1001 12:36:58.753250    4721 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1bec8634fed302f64212571ed3ed0831b844a21f4f42ed3778332e10a4ff7e9e 
	I1001 12:36:58.753385    4721 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 12:36:58.753394    4721 cni.go:84] Creating CNI manager for ""
	I1001 12:36:58.753401    4721 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:36:58.757097    4721 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 12:36:58.764035    4721 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 12:36:58.767083    4721 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 12:36:58.774447    4721 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 12:36:58.774519    4721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-340000 minikube.k8s.io/updated_at=2024_10_01T12_36_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=stopped-upgrade-340000 minikube.k8s.io/primary=true
	I1001 12:36:58.774520    4721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 12:36:58.809772    4721 kubeadm.go:1113] duration metric: took 35.309458ms to wait for elevateKubeSystemPrivileges
	I1001 12:36:58.815288    4721 ops.go:34] apiserver oom_adj: -16
	I1001 12:36:58.815297    4721 kubeadm.go:394] duration metric: took 4m11.653538458s to StartCluster
	I1001 12:36:58.815307    4721 settings.go:142] acquiring lock: {Name:mk456a8b96b1746a679d3a85129b9d4d9b38bdfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:36:58.815398    4721 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:36:58.815806    4721 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/kubeconfig: {Name:mkdfe60702c76fe804796a27b08676f2ebb5427f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:36:58.816036    4721 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:36:58.816077    4721 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 12:36:58.816113    4721 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-340000"
	I1001 12:36:58.816121    4721 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-340000"
	I1001 12:36:58.816121    4721 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	W1001 12:36:58.816124    4721 addons.go:243] addon storage-provisioner should already be in state true
	I1001 12:36:58.816144    4721 host.go:66] Checking if "stopped-upgrade-340000" exists ...
	I1001 12:36:58.816179    4721 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-340000"
	I1001 12:36:58.816190    4721 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-340000"
	I1001 12:36:58.817070    4721 kapi.go:59] client config for stopped-upgrade-340000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/stopped-upgrade-340000/client.key", CAFile:"/Users/jenkins/minikube-integration/19736-1073/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103e525d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 12:36:58.817189    4721 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-340000"
	W1001 12:36:58.817193    4721 addons.go:243] addon default-storageclass should already be in state true
	I1001 12:36:58.817201    4721 host.go:66] Checking if "stopped-upgrade-340000" exists ...
	I1001 12:36:58.820016    4721 out.go:177] * Verifying Kubernetes components...
	I1001 12:36:58.820321    4721 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 12:36:58.824189    4721 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 12:36:58.824196    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:36:58.827954    4721 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 12:36:58.832050    4721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 12:36:58.836096    4721 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:36:58.836104    4721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 12:36:58.836110    4721 sshutil.go:53] new ssh client: &{IP:localhost Port:50476 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/stopped-upgrade-340000/id_rsa Username:docker}
	I1001 12:36:58.908983    4721 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 12:36:58.914444    4721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 12:36:58.917780    4721 api_server.go:52] waiting for apiserver process to appear ...
	I1001 12:36:58.917830    4721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 12:36:58.947920    4721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 12:36:59.266026    4721 api_server.go:72] duration metric: took 449.98825ms to wait for apiserver process to appear ...
	I1001 12:36:59.266040    4721 api_server.go:88] waiting for apiserver healthz status ...
	I1001 12:36:59.266052    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:36:59.266469    4721 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 12:36:59.266478    4721 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 12:37:04.268027    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:04.268084    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:09.268308    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:09.268341    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:14.268527    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:14.268576    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:19.268919    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:19.268939    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:24.269385    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:24.269438    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 12:37:29.267470    4721 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 12:37:29.270037    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:29.270056    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:29.270729    4721 out.go:177] * Enabled addons: storage-provisioner
	I1001 12:37:29.276589    4721 addons.go:510] duration metric: took 30.461293709s for enable addons: enabled=[storage-provisioner]
	I1001 12:37:34.271039    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:34.271114    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:39.272345    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:39.272389    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:44.273916    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:44.273964    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:49.275978    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:49.276021    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:54.278276    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:54.278366    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:37:59.279379    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:37:59.279488    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:37:59.291998    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:37:59.292080    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:37:59.303027    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:37:59.303115    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:37:59.313833    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:37:59.313919    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:37:59.323704    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:37:59.323791    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:37:59.334556    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:37:59.334640    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:37:59.345155    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:37:59.345239    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:37:59.355381    4721 logs.go:276] 0 containers: []
	W1001 12:37:59.355393    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:37:59.355467    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:37:59.365818    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:37:59.365835    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:37:59.365840    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:37:59.399559    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:37:59.399576    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:37:59.404188    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:37:59.404195    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:37:59.418750    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:37:59.418759    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:37:59.432922    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:37:59.432935    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:37:59.444349    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:37:59.444361    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:37:59.455890    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:37:59.455904    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:37:59.492779    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:37:59.492795    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:37:59.505186    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:37:59.505200    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:37:59.524178    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:37:59.524193    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:37:59.535390    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:37:59.535407    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:37:59.559829    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:37:59.559840    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:37:59.571412    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:37:59.571422    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:02.099043    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:07.101551    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:07.101784    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:07.122736    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:07.122858    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:07.137813    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:07.137898    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:07.150666    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:07.150758    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:07.162039    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:07.162126    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:07.172683    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:07.172767    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:07.183243    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:07.183328    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:07.193322    4721 logs.go:276] 0 containers: []
	W1001 12:38:07.193334    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:07.193408    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:07.204427    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:07.204443    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:07.204449    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:07.245441    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:07.245457    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:07.259142    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:07.259153    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:07.270977    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:07.270989    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:07.285810    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:07.285825    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:07.297195    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:07.297205    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:07.301654    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:07.301659    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:07.316006    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:07.316016    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:07.328901    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:07.328921    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:07.343680    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:07.343694    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:07.362056    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:07.362068    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:07.373860    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:07.373872    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:07.398074    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:07.398085    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:09.933982    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:14.935151    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:14.935298    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:14.948449    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:14.948534    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:14.961296    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:14.961393    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:14.975284    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:14.975377    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:14.989777    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:14.989866    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:15.003380    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:15.003477    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:15.017008    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:15.017082    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:15.029833    4721 logs.go:276] 0 containers: []
	W1001 12:38:15.029846    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:15.029918    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:15.042526    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:15.042545    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:15.042551    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:15.058543    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:15.058556    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:15.076900    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:15.076916    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:15.091692    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:15.091707    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:15.113887    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:15.113900    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:15.155130    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:15.155141    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:15.194810    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:15.194827    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:15.211452    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:15.211463    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:15.229572    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:15.229587    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:15.243058    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:15.243072    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:15.265347    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:15.265359    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:15.269915    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:15.269924    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:15.281542    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:15.281557    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:17.809181    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:22.811955    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:22.812469    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:22.846976    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:22.847145    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:22.867757    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:22.867864    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:22.886141    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:22.886225    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:22.901386    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:22.901461    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:22.911840    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:22.911916    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:22.922177    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:22.922251    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:22.932123    4721 logs.go:276] 0 containers: []
	W1001 12:38:22.932136    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:22.932207    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:22.942565    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:22.942579    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:22.942584    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:22.947500    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:22.947508    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:22.965987    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:22.966002    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:22.977309    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:22.977321    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:22.988529    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:22.988539    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:22.999811    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:22.999827    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:23.025033    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:23.025056    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:23.062849    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:23.062866    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:23.096973    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:23.096989    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:23.111504    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:23.111515    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:23.125356    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:23.125367    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:23.139605    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:23.139617    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:23.156834    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:23.156844    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:25.669913    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:30.672743    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:30.673289    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:30.716667    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:30.716816    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:30.735666    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:30.735776    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:30.749621    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:30.749708    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:30.760902    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:30.760988    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:30.771099    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:30.771184    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:30.781589    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:30.781673    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:30.791677    4721 logs.go:276] 0 containers: []
	W1001 12:38:30.791688    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:30.791756    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:30.803309    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:30.803326    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:30.803332    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:30.818070    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:30.818081    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:30.833650    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:30.833661    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:30.845141    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:30.845151    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:30.856381    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:30.856392    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:30.873282    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:30.873292    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:30.884251    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:30.884262    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:30.908067    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:30.908082    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:30.940914    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:30.940921    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:30.975892    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:30.975900    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:30.989965    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:30.989977    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:31.009581    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:31.009592    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:31.020896    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:31.020909    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:33.527106    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:38.529232    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:38.529772    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:38.565476    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:38.565652    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:38.585614    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:38.585730    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:38.600960    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:38.601049    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:38.615734    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:38.615810    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:38.626365    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:38.626438    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:38.637082    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:38.637171    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:38.646972    4721 logs.go:276] 0 containers: []
	W1001 12:38:38.646985    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:38.647056    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:38.660768    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:38.660781    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:38.660787    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:38.677788    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:38.677800    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:38.689201    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:38.689213    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:38.714157    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:38.714166    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:38.718371    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:38.718378    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:38.757956    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:38.757972    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:38.772637    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:38.772649    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:38.789218    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:38.789231    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:38.800934    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:38.800945    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:38.812710    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:38.812722    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:38.846864    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:38.846877    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:38.861579    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:38.861591    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:38.880874    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:38.880886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:41.397625    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:46.400033    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:46.400658    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:46.440342    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:46.440504    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:46.461724    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:46.461866    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:46.476422    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:46.476503    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:46.488694    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:46.488778    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:46.503973    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:46.504054    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:46.514912    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:46.514982    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:46.525301    4721 logs.go:276] 0 containers: []
	W1001 12:38:46.525313    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:46.525383    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:46.535604    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:46.535619    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:46.535625    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:46.547741    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:46.547756    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:46.559277    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:46.559291    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:46.594003    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:46.594013    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:46.598611    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:46.598620    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:46.613721    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:46.613734    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:46.627919    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:46.627933    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:46.643063    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:46.643077    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:46.667259    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:46.667267    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:46.678647    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:46.678661    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:46.714339    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:46.714352    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:46.730621    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:46.730636    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:46.742705    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:46.742721    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:49.262051    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:38:54.264669    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:38:54.265143    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:38:54.302384    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:38:54.302546    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:38:54.323967    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:38:54.324090    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:38:54.338518    4721 logs.go:276] 2 containers: [01000004d151 e8cac5e28698]
	I1001 12:38:54.338611    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:38:54.350898    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:38:54.350974    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:38:54.361344    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:38:54.361432    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:38:54.371795    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:38:54.371883    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:38:54.383468    4721 logs.go:276] 0 containers: []
	W1001 12:38:54.383480    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:38:54.383545    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:38:54.394394    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:38:54.394410    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:38:54.394418    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:38:54.408545    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:38:54.408557    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:38:54.419871    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:38:54.419885    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:38:54.432597    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:38:54.432611    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:38:54.448970    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:38:54.448983    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:38:54.466420    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:38:54.466430    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:38:54.499830    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:38:54.499838    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:38:54.503798    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:38:54.503807    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:38:54.515590    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:38:54.515602    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:38:54.534471    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:38:54.534482    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:38:54.559554    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:38:54.559565    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:38:54.570565    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:38:54.570576    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:38:54.604576    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:38:54.604592    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:38:57.123937    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:02.126648    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:02.127297    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:02.168368    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:02.168540    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:02.190025    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:02.190161    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:02.205014    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:02.205111    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:02.217319    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:02.217405    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:02.233495    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:02.233572    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:02.246560    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:02.246646    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:02.257679    4721 logs.go:276] 0 containers: []
	W1001 12:39:02.257691    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:02.257756    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:02.272706    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:02.272724    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:02.272730    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:02.308021    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:02.308034    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:02.322116    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:02.322130    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:02.333953    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:02.333965    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:02.345463    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:02.345476    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:02.350155    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:02.350161    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:02.384523    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:02.384536    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:02.398858    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:02.398872    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:02.410160    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:02.410179    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:02.421873    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:02.421887    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:02.437475    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:02.437486    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:02.449107    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:02.449119    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:02.460575    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:02.460584    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:02.472758    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:02.472772    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:02.493849    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:02.493863    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:05.019751    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:10.020641    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:10.021213    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:10.062851    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:10.063005    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:10.085242    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:10.085367    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:10.108360    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:10.108447    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:10.119469    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:10.119536    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:10.130021    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:10.130107    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:10.140451    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:10.140537    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:10.150314    4721 logs.go:276] 0 containers: []
	W1001 12:39:10.150327    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:10.150400    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:10.161014    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:10.161033    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:10.161038    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:10.172927    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:10.172940    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:10.188267    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:10.188278    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:10.213451    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:10.213460    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:10.248744    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:10.248754    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:10.263881    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:10.263895    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:10.275620    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:10.275634    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:10.287010    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:10.287024    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:10.321537    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:10.321553    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:10.333905    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:10.333921    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:10.348826    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:10.348842    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:10.353467    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:10.353477    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:10.367116    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:10.367131    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:10.377996    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:10.378007    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:10.389337    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:10.389347    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:12.908828    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:17.911454    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:17.912057    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:17.949026    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:17.949187    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:17.973204    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:17.973317    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:17.989474    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:17.989586    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:18.002314    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:18.002398    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:18.012863    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:18.012952    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:18.023597    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:18.023676    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:18.033927    4721 logs.go:276] 0 containers: []
	W1001 12:39:18.033943    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:18.034007    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:18.044424    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:18.044440    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:18.044446    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:18.056193    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:18.056203    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:18.070486    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:18.070497    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:18.094462    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:18.094473    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:18.137417    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:18.137432    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:18.151275    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:18.151285    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:18.155466    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:18.155472    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:18.166768    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:18.166780    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:18.178827    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:18.178843    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:18.196080    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:18.196091    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:18.207438    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:18.207454    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:18.221495    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:18.221507    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:18.233107    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:18.233118    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:18.244994    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:18.245008    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:18.257208    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:18.257222    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:20.792396    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:25.794872    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:25.795121    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:25.817409    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:25.817519    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:25.832377    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:25.832471    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:25.844556    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:25.844643    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:25.855611    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:25.855689    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:25.866175    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:25.866258    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:25.876651    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:25.876733    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:25.886456    4721 logs.go:276] 0 containers: []
	W1001 12:39:25.886471    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:25.886547    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:25.896704    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:25.896722    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:25.896728    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:25.908155    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:25.908165    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:25.925347    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:25.925360    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:25.930102    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:25.930110    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:25.950937    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:25.950952    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:25.962079    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:25.962093    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:25.976656    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:25.976667    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:25.987850    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:25.987863    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:26.002114    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:26.002128    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:26.013896    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:26.013909    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:26.027935    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:26.027948    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:26.039197    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:26.039208    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:26.050455    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:26.050468    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:26.074167    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:26.074177    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:26.106829    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:26.106836    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:28.648485    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:33.651054    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:33.651634    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:33.692191    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:33.692344    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:33.713219    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:33.713330    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:33.732889    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:33.732986    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:33.744722    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:33.744807    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:33.755125    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:33.755207    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:33.765880    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:33.765952    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:33.778372    4721 logs.go:276] 0 containers: []
	W1001 12:39:33.778391    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:33.778465    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:33.789198    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:33.789222    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:33.789227    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:33.824483    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:33.824493    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:33.828795    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:33.828804    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:33.849673    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:33.849683    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:33.863971    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:33.863988    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:33.878620    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:33.878632    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:33.902130    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:33.902137    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:33.937127    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:33.937139    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:33.948873    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:33.948885    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:33.963095    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:33.963107    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:33.974744    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:33.974756    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:33.986582    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:33.986595    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:33.998730    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:33.998746    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:34.016588    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:34.016599    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:34.027565    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:34.027579    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:36.540882    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:41.543432    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:41.543514    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:41.554774    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:41.554856    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:41.565887    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:41.565965    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:41.576672    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:41.576742    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:41.587967    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:41.588036    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:41.599222    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:41.599291    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:41.609729    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:41.609808    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:41.620118    4721 logs.go:276] 0 containers: []
	W1001 12:39:41.620132    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:41.620204    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:41.635495    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:41.635513    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:41.635518    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:41.650318    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:41.650329    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:41.661913    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:41.661926    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:41.695981    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:41.695993    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:41.707774    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:41.707788    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:41.719785    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:41.719799    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:41.756488    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:41.756499    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:41.768570    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:41.768582    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:41.792500    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:41.792511    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:41.809840    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:41.809850    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:41.821723    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:41.821736    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:41.826288    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:41.826298    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:41.841912    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:41.841929    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:41.853557    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:41.853573    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:41.868353    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:41.868367    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:44.381833    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:49.384585    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:49.384884    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:49.412160    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:49.412301    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:49.432328    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:49.432428    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:49.445125    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:49.445200    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:49.456670    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:49.456751    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:49.466985    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:49.467061    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:49.477072    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:49.477146    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:49.487448    4721 logs.go:276] 0 containers: []
	W1001 12:39:49.487460    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:49.487532    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:49.497623    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:49.497642    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:49.497647    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:49.509578    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:49.509588    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:49.521057    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:49.521066    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:49.533223    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:49.533235    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:49.547675    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:49.547684    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:49.558781    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:49.558793    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:49.575255    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:49.575266    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:49.609280    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:49.609289    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:49.613346    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:49.613355    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:49.627146    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:49.627157    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:49.644326    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:49.644336    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:39:49.680458    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:49.680470    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:49.694580    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:49.694593    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:49.706896    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:49.706909    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:49.718200    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:49.718209    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:52.245675    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:39:57.247725    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:39:57.247853    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:39:57.259392    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:39:57.259480    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:39:57.269674    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:39:57.269751    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:39:57.280219    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:39:57.280294    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:39:57.290950    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:39:57.291029    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:39:57.301122    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:39:57.301204    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:39:57.311878    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:39:57.311967    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:39:57.322080    4721 logs.go:276] 0 containers: []
	W1001 12:39:57.322094    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:39:57.322165    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:39:57.332715    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:39:57.332733    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:39:57.332738    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:39:57.344075    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:39:57.344089    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:39:57.358631    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:39:57.358643    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:39:57.370128    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:39:57.370138    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:39:57.381719    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:39:57.381733    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:39:57.416311    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:39:57.416326    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:39:57.433349    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:39:57.433362    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:39:57.445159    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:39:57.445174    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:39:57.459196    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:39:57.459210    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:39:57.472899    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:39:57.472911    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:39:57.484219    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:39:57.484230    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:39:57.495515    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:39:57.495529    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:39:57.519106    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:39:57.519115    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:39:57.530594    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:39:57.530608    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:39:57.534622    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:39:57.534631    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:00.070287    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:05.071500    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:05.071677    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:05.091217    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:05.091276    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:05.102599    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:05.102686    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:05.114946    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:05.115027    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:05.126945    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:05.127003    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:05.137762    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:05.137842    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:05.149195    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:05.149260    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:05.161658    4721 logs.go:276] 0 containers: []
	W1001 12:40:05.161673    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:05.161760    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:05.173143    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:05.173160    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:05.173165    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:05.210689    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:05.210702    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:05.229528    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:05.229538    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:05.242200    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:05.242211    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:05.247514    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:05.247524    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:05.263958    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:05.263968    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:05.288226    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:05.288242    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:05.301973    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:05.301989    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:05.315347    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:05.315371    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:05.328142    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:05.328154    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:05.348824    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:05.348833    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:05.384048    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:05.384067    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:05.399548    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:05.399556    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:05.412114    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:05.412129    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:05.425444    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:05.425459    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:07.940431    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:12.943149    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:12.943792    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:12.991839    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:12.991988    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:13.018763    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:13.018850    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:13.033036    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:13.033129    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:13.045963    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:13.046062    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:13.058969    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:13.059048    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:13.071415    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:13.071499    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:13.088252    4721 logs.go:276] 0 containers: []
	W1001 12:40:13.088265    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:13.088338    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:13.100706    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:13.100726    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:13.100732    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:13.114332    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:13.114350    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:13.151166    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:13.151188    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:13.164492    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:13.164507    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:13.178445    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:13.178461    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:13.197301    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:13.197316    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:13.211722    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:13.211737    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:13.230094    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:13.230107    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:13.244458    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:13.244469    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:13.256697    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:13.256709    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:13.281311    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:13.281320    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:13.285570    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:13.285580    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:13.321732    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:13.321743    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:13.333355    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:13.333368    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:13.344919    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:13.344930    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:15.858674    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:20.860841    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:20.861389    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:20.903048    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:20.903201    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:20.926067    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:20.926202    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:20.942227    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:20.942322    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:20.954629    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:20.954709    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:20.965715    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:20.965789    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:20.978120    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:20.978204    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:20.988175    4721 logs.go:276] 0 containers: []
	W1001 12:40:20.988187    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:20.988258    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:20.998704    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:20.998725    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:20.998731    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:21.013135    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:21.013145    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:21.025498    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:21.025512    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:21.043154    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:21.043168    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:21.054693    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:21.054704    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:21.065951    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:21.065961    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:21.100494    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:21.100505    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:21.113828    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:21.113839    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:21.137211    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:21.137220    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:21.154783    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:21.154796    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:21.167579    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:21.167591    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:21.172537    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:21.172546    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:21.183707    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:21.183718    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:21.195630    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:21.195642    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:21.215330    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:21.215356    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:23.752764    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:28.754285    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:28.754399    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:28.766320    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:28.766405    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:28.777853    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:28.777952    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:28.790393    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:28.790479    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:28.801336    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:28.801433    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:28.813072    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:28.813158    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:28.825073    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:28.825161    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:28.836653    4721 logs.go:276] 0 containers: []
	W1001 12:40:28.836665    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:28.836720    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:28.848678    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:28.848695    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:28.848701    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:28.861171    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:28.861179    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:28.865353    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:28.865361    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:28.880680    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:28.880693    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:28.896300    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:28.896310    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:28.913870    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:28.913886    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:28.928600    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:28.928613    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:28.963565    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:28.963584    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:29.006034    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:29.006048    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:29.021151    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:29.021163    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:29.033963    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:29.033971    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:29.045839    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:29.045854    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:29.071011    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:29.071033    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:29.084569    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:29.084582    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:29.097634    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:29.097645    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:31.618957    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:36.621511    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:36.621646    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:36.632888    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:36.632976    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:36.642989    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:36.643071    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:36.653741    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:36.653829    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:36.664115    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:36.664188    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:36.674683    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:36.674764    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:36.685088    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:36.685173    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:36.695385    4721 logs.go:276] 0 containers: []
	W1001 12:40:36.695396    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:36.695456    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:36.705606    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:36.705623    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:36.705629    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:36.717571    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:36.717586    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:36.741531    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:36.741541    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:36.745522    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:36.745528    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:36.778451    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:36.778465    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:36.792341    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:36.792354    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:36.804697    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:36.804711    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:36.816453    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:36.816468    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:36.830754    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:36.830768    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:36.847513    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:36.847523    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:36.859741    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:36.859756    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:36.894318    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:36.894327    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:36.908628    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:36.908637    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:36.926041    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:36.926052    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:36.937635    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:36.937653    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:39.450923    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:44.453547    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:44.453709    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:44.465307    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:44.465398    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:44.475514    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:44.475585    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:44.486077    4721 logs.go:276] 4 containers: [5a30533944e5 3f8b26d3d50c 01000004d151 e8cac5e28698]
	I1001 12:40:44.486149    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:44.496819    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:44.496903    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:44.507737    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:44.507814    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:44.518293    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:44.518362    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:44.528473    4721 logs.go:276] 0 containers: []
	W1001 12:40:44.528486    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:44.528555    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:44.539157    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:44.539175    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:44.539180    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:44.554213    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:44.554229    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:44.567733    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:44.567744    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:44.585487    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:44.585501    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:44.597022    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:44.597037    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:44.632770    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:44.632784    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:44.636992    4721 logs.go:123] Gathering logs for coredns [01000004d151] ...
	I1001 12:40:44.637002    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 01000004d151"
	I1001 12:40:44.648120    4721 logs.go:123] Gathering logs for coredns [e8cac5e28698] ...
	I1001 12:40:44.648131    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8cac5e28698"
	I1001 12:40:44.660465    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:44.660479    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:44.685529    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:44.685540    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:44.719712    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:44.719723    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:44.731502    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:44.731517    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:44.745545    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:44.745555    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:44.757542    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:44.757553    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:44.769167    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:44.769177    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:47.282636    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:40:52.285264    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:40:52.285363    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 12:40:52.298079    4721 logs.go:276] 1 containers: [b39b64d2ea7c]
	I1001 12:40:52.298159    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 12:40:52.309430    4721 logs.go:276] 1 containers: [b8681044ebb8]
	I1001 12:40:52.309530    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 12:40:52.320485    4721 logs.go:276] 4 containers: [6d31f8e50eea f64d5f27dae9 5a30533944e5 3f8b26d3d50c]
	I1001 12:40:52.320572    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 12:40:52.332246    4721 logs.go:276] 1 containers: [1c059d37a4b1]
	I1001 12:40:52.332335    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 12:40:52.345625    4721 logs.go:276] 1 containers: [3fa320c5a26b]
	I1001 12:40:52.345710    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 12:40:52.356880    4721 logs.go:276] 1 containers: [38de77956fbf]
	I1001 12:40:52.356955    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 12:40:52.367583    4721 logs.go:276] 0 containers: []
	W1001 12:40:52.367595    4721 logs.go:278] No container was found matching "kindnet"
	I1001 12:40:52.367681    4721 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 12:40:52.379242    4721 logs.go:276] 1 containers: [86bbc96e040d]
	I1001 12:40:52.379261    4721 logs.go:123] Gathering logs for dmesg ...
	I1001 12:40:52.379267    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 12:40:52.384377    4721 logs.go:123] Gathering logs for coredns [3f8b26d3d50c] ...
	I1001 12:40:52.384386    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3f8b26d3d50c"
	I1001 12:40:52.396963    4721 logs.go:123] Gathering logs for Docker ...
	I1001 12:40:52.396981    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 12:40:52.421043    4721 logs.go:123] Gathering logs for kubelet ...
	I1001 12:40:52.421062    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 12:40:52.456266    4721 logs.go:123] Gathering logs for coredns [f64d5f27dae9] ...
	I1001 12:40:52.456285    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64d5f27dae9"
	I1001 12:40:52.468129    4721 logs.go:123] Gathering logs for kube-controller-manager [38de77956fbf] ...
	I1001 12:40:52.468141    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38de77956fbf"
	I1001 12:40:52.487438    4721 logs.go:123] Gathering logs for etcd [b8681044ebb8] ...
	I1001 12:40:52.487449    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8681044ebb8"
	I1001 12:40:52.502248    4721 logs.go:123] Gathering logs for coredns [6d31f8e50eea] ...
	I1001 12:40:52.502262    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6d31f8e50eea"
	I1001 12:40:52.522582    4721 logs.go:123] Gathering logs for kube-scheduler [1c059d37a4b1] ...
	I1001 12:40:52.522591    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c059d37a4b1"
	I1001 12:40:52.539462    4721 logs.go:123] Gathering logs for kube-proxy [3fa320c5a26b] ...
	I1001 12:40:52.539484    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fa320c5a26b"
	I1001 12:40:52.552768    4721 logs.go:123] Gathering logs for storage-provisioner [86bbc96e040d] ...
	I1001 12:40:52.552779    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86bbc96e040d"
	I1001 12:40:52.565902    4721 logs.go:123] Gathering logs for kube-apiserver [b39b64d2ea7c] ...
	I1001 12:40:52.565914    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b39b64d2ea7c"
	I1001 12:40:52.581321    4721 logs.go:123] Gathering logs for coredns [5a30533944e5] ...
	I1001 12:40:52.581333    4721 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a30533944e5"
	I1001 12:40:52.596518    4721 logs.go:123] Gathering logs for container status ...
	I1001 12:40:52.596530    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 12:40:52.611619    4721 logs.go:123] Gathering logs for describe nodes ...
	I1001 12:40:52.611633    4721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 12:40:55.152786    4721 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 12:41:00.155321    4721 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 12:41:00.162663    4721 out.go:201] 
	W1001 12:41:00.166591    4721 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 12:41:00.166620    4721 out.go:270] * 
	* 
	W1001 12:41:00.169439    4721 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:00.183864    4721 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-340000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (582.77s)
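The retry loop above polls the same healthz URL roughly every eight seconds, each attempt timing out after about five seconds, for the full 6m0s wait before giving up. A minimal manual re-check of that endpoint, assuming the guest is still reachable over ssh and has curl available (the IP, port, and profile name are taken from the log above):

	minikube ssh -p stopped-upgrade-340000 -- curl -k https://10.0.2.15:8443/healthz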

                                                
                                    
TestPause/serial/Start (10.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.141186083s)

                                                
                                                
-- stdout --
	* [pause-031000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-031000" primary control-plane node in "pause-031000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-031000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-031000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-031000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-031000 -n pause-031000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-031000 -n pause-031000: exit status 7 (53.35875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-031000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.20s)
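Every qemu2 start in this group fails at the same point: the driver cannot reach the socket_vmnet socket named in the error. A hedged first check of that endpoint on the host, assuming socket_vmnet was installed as a launchd service (the grep pattern is a guess since the service label varies by install method; the socket path is copied from the error):

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet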

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 : exit status 80 (9.920627167s)

                                                
                                                
-- stdout --
	* [NoKubernetes-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-870000" primary control-plane node in "NoKubernetes-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000: exit status 7 (54.604083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)

TestNoKubernetes/serial/StartWithStopK8s (5.86s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 : exit status 80 (5.795398333s)

-- stdout --
	* [NoKubernetes-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-870000
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000: exit status 7 (65.305417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.86s)

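The post-mortem helper reads host state through minikube's Go-template status output, which is why the lone word "Stopped" is all that reaches stdout. The same flag can pull additional fields when triaging by hand; a sketch (field names taken from the {{.Host}} usage above plus minikube's documented status fields, worth confirming against `minikube status --help` for this build):

    # Host state only, exactly as helpers_test.go does:
    out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000
    # Host, kubelet, and apiserver state on one line:
    out/minikube-darwin-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p NoKubernetes-870000
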
TestNoKubernetes/serial/Start (5.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 
E1001 12:38:45.820379    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 : exit status 80 (5.811479375s)

-- stdout --
	* [NoKubernetes-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-870000
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000: exit status 7 (67.102917ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.88s)

TestNoKubernetes/serial/StartNoArgs (5.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 : exit status 80 (5.812687958s)

-- stdout --
	* [NoKubernetes-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-870000
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-870000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-870000 -n NoKubernetes-870000: exit status 7 (29.443958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.84s)

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.905770959s)

-- stdout --
	* [auto-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-298000" primary control-plane node in "auto-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:39:29.144328    4991 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:39:29.144440    4991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:29.144444    4991 out.go:358] Setting ErrFile to fd 2...
	I1001 12:39:29.144446    4991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:29.144564    4991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:39:29.145645    4991 out.go:352] Setting JSON to false
	I1001 12:39:29.161766    4991 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4134,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:39:29.161842    4991 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:39:29.169052    4991 out.go:177] * [auto-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:39:29.175921    4991 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:39:29.175951    4991 notify.go:220] Checking for updates...
	I1001 12:39:29.181834    4991 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:39:29.184889    4991 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:39:29.186411    4991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:39:29.189844    4991 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:39:29.192886    4991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:39:29.196231    4991 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:39:29.196293    4991 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:39:29.196334    4991 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:39:29.200874    4991 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:39:29.207802    4991 start.go:297] selected driver: qemu2
	I1001 12:39:29.207810    4991 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:39:29.207818    4991 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:39:29.210005    4991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:39:29.213918    4991 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:39:29.216957    4991 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:39:29.216971    4991 cni.go:84] Creating CNI manager for ""
	I1001 12:39:29.216990    4991 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:39:29.216994    4991 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:39:29.217026    4991 start.go:340] cluster config:
	{Name:auto-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:39:29.220376    4991 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:39:29.226891    4991 out.go:177] * Starting "auto-298000" primary control-plane node in "auto-298000" cluster
	I1001 12:39:29.230894    4991 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:39:29.230908    4991 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:39:29.230916    4991 cache.go:56] Caching tarball of preloaded images
	I1001 12:39:29.230987    4991 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:39:29.230994    4991 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:39:29.231051    4991 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/auto-298000/config.json ...
	I1001 12:39:29.231061    4991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/auto-298000/config.json: {Name:mkdf83e4865c4d6ea5c9d90b2fa41aeca4f8fddf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:39:29.231325    4991 start.go:360] acquireMachinesLock for auto-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:39:29.231354    4991 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "auto-298000"
	I1001 12:39:29.231364    4991 start.go:93] Provisioning new machine with config: &{Name:auto-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:39:29.231390    4991 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:39:29.239851    4991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:39:29.255005    4991 start.go:159] libmachine.API.Create for "auto-298000" (driver="qemu2")
	I1001 12:39:29.255034    4991 client.go:168] LocalClient.Create starting
	I1001 12:39:29.255101    4991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:39:29.255143    4991 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:29.255157    4991 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:29.255205    4991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:39:29.255233    4991 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:29.255244    4991 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:29.255585    4991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:39:29.416973    4991 main.go:141] libmachine: Creating SSH key...
	I1001 12:39:29.526299    4991 main.go:141] libmachine: Creating Disk image...
	I1001 12:39:29.526306    4991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:39:29.526494    4991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:29.535573    4991 main.go:141] libmachine: STDOUT: 
	I1001 12:39:29.535592    4991 main.go:141] libmachine: STDERR: 
	I1001 12:39:29.535653    4991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2 +20000M
	I1001 12:39:29.543424    4991 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:39:29.543440    4991 main.go:141] libmachine: STDERR: 
	I1001 12:39:29.543456    4991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:29.543463    4991 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:39:29.543475    4991 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:39:29.543505    4991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:ef:f8:f2:a0:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:29.545054    4991 main.go:141] libmachine: STDOUT: 
	I1001 12:39:29.545122    4991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:39:29.545141    4991 client.go:171] duration metric: took 290.108708ms to LocalClient.Create
	I1001 12:39:31.547317    4991 start.go:128] duration metric: took 2.315952458s to createHost
	I1001 12:39:31.547393    4991 start.go:83] releasing machines lock for "auto-298000", held for 2.316089917s
	W1001 12:39:31.547530    4991 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:31.563764    4991 out.go:177] * Deleting "auto-298000" in qemu2 ...
	W1001 12:39:31.601091    4991 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:31.601118    4991 start.go:729] Will try again in 5 seconds ...
	I1001 12:39:36.603202    4991 start.go:360] acquireMachinesLock for auto-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:39:36.603736    4991 start.go:364] duration metric: took 415.625µs to acquireMachinesLock for "auto-298000"
	I1001 12:39:36.603808    4991 start.go:93] Provisioning new machine with config: &{Name:auto-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:39:36.604081    4991 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:39:36.612856    4991 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:39:36.656105    4991 start.go:159] libmachine.API.Create for "auto-298000" (driver="qemu2")
	I1001 12:39:36.656153    4991 client.go:168] LocalClient.Create starting
	I1001 12:39:36.656283    4991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:39:36.656364    4991 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:36.656379    4991 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:36.656437    4991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:39:36.656477    4991 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:36.656488    4991 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:36.657074    4991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:39:36.821651    4991 main.go:141] libmachine: Creating SSH key...
	I1001 12:39:36.958309    4991 main.go:141] libmachine: Creating Disk image...
	I1001 12:39:36.958317    4991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:39:36.958545    4991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:36.967941    4991 main.go:141] libmachine: STDOUT: 
	I1001 12:39:36.967963    4991 main.go:141] libmachine: STDERR: 
	I1001 12:39:36.968024    4991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2 +20000M
	I1001 12:39:36.975959    4991 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:39:36.975974    4991 main.go:141] libmachine: STDERR: 
	I1001 12:39:36.975987    4991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:36.975993    4991 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:39:36.976005    4991 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:39:36.976039    4991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:fe:d1:2a:e9:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/auto-298000/disk.qcow2
	I1001 12:39:36.977644    4991 main.go:141] libmachine: STDOUT: 
	I1001 12:39:36.977659    4991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:39:36.977671    4991 client.go:171] duration metric: took 321.520375ms to LocalClient.Create
	I1001 12:39:38.979799    4991 start.go:128] duration metric: took 2.375732708s to createHost
	I1001 12:39:38.979871    4991 start.go:83] releasing machines lock for "auto-298000", held for 2.376167875s
	W1001 12:39:38.980258    4991 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:38.991520    4991 out.go:201] 
	W1001 12:39:38.994829    4991 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:39:38.994901    4991 out.go:270] * 
	* 
	W1001 12:39:38.997239    4991 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:39:39.007765    4991 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)

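The --alsologtostderr trace above pins down where the start dies: libmachine prepares the disk with qemu-img convert/resize (both succeed), then launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and it is that wrapper which fails. The wrapper can be probed in isolation, without QEMU; a sketch using /usr/bin/true as a harmless stand-in child (socket_vmnet_client connects to the socket and execs the given command with the vmnet fd passed to it, which is what the fd=3 in the -netdev argument refers to):

    # Exits cleanly when the daemon is reachable; otherwise prints the same
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
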
TestNetworkPlugins/group/kindnet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.876428875s)

-- stdout --
	* [kindnet-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-298000" primary control-plane node in "kindnet-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:39:41.179997    5104 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:39:41.180150    5104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:41.180153    5104 out.go:358] Setting ErrFile to fd 2...
	I1001 12:39:41.180155    5104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:41.180303    5104 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:39:41.181431    5104 out.go:352] Setting JSON to false
	I1001 12:39:41.198285    5104 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4146,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:39:41.198350    5104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:39:41.205913    5104 out.go:177] * [kindnet-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:39:41.213852    5104 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:39:41.213877    5104 notify.go:220] Checking for updates...
	I1001 12:39:41.219850    5104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:39:41.222831    5104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:39:41.225860    5104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:39:41.228857    5104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:39:41.231794    5104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:39:41.235152    5104 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:39:41.235219    5104 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:39:41.235257    5104 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:39:41.239857    5104 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:39:41.246810    5104 start.go:297] selected driver: qemu2
	I1001 12:39:41.246815    5104 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:39:41.246828    5104 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:39:41.248834    5104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:39:41.251851    5104 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:39:41.253310    5104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:39:41.253326    5104 cni.go:84] Creating CNI manager for "kindnet"
	I1001 12:39:41.253329    5104 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 12:39:41.253356    5104 start.go:340] cluster config:
	{Name:kindnet-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:39:41.256917    5104 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:39:41.264856    5104 out.go:177] * Starting "kindnet-298000" primary control-plane node in "kindnet-298000" cluster
	I1001 12:39:41.270787    5104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:39:41.270815    5104 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:39:41.270825    5104 cache.go:56] Caching tarball of preloaded images
	I1001 12:39:41.270902    5104 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:39:41.270909    5104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:39:41.270965    5104 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kindnet-298000/config.json ...
	I1001 12:39:41.270976    5104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kindnet-298000/config.json: {Name:mkf38759b3c6067f05a1b1eaf88092ac2aa94b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:39:41.271190    5104 start.go:360] acquireMachinesLock for kindnet-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:39:41.271220    5104 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "kindnet-298000"
	I1001 12:39:41.271232    5104 start.go:93] Provisioning new machine with config: &{Name:kindnet-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:39:41.271267    5104 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:39:41.274885    5104 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:39:41.290359    5104 start.go:159] libmachine.API.Create for "kindnet-298000" (driver="qemu2")
	I1001 12:39:41.290387    5104 client.go:168] LocalClient.Create starting
	I1001 12:39:41.290448    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:39:41.290477    5104 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:41.290486    5104 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:41.290533    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:39:41.290555    5104 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:41.290566    5104 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:41.290967    5104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:39:41.452368    5104 main.go:141] libmachine: Creating SSH key...
	I1001 12:39:41.580287    5104 main.go:141] libmachine: Creating Disk image...
	I1001 12:39:41.580295    5104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:39:41.580537    5104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:41.590857    5104 main.go:141] libmachine: STDOUT: 
	I1001 12:39:41.590881    5104 main.go:141] libmachine: STDERR: 
	I1001 12:39:41.590946    5104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2 +20000M
	I1001 12:39:41.600105    5104 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:39:41.600124    5104 main.go:141] libmachine: STDERR: 
	I1001 12:39:41.600151    5104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:41.600157    5104 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:39:41.600180    5104 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:39:41.600247    5104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:60:c8:73:e5:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:41.602240    5104 main.go:141] libmachine: STDOUT: 
	I1001 12:39:41.602257    5104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:39:41.602289    5104 client.go:171] duration metric: took 311.903292ms to LocalClient.Create
	I1001 12:39:43.604580    5104 start.go:128] duration metric: took 2.333338042s to createHost
	I1001 12:39:43.604699    5104 start.go:83] releasing machines lock for "kindnet-298000", held for 2.333528333s
	W1001 12:39:43.604814    5104 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:43.615989    5104 out.go:177] * Deleting "kindnet-298000" in qemu2 ...
	W1001 12:39:43.656923    5104 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:43.656953    5104 start.go:729] Will try again in 5 seconds ...
	I1001 12:39:48.657206    5104 start.go:360] acquireMachinesLock for kindnet-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:39:48.657617    5104 start.go:364] duration metric: took 329.959µs to acquireMachinesLock for "kindnet-298000"
	I1001 12:39:48.657722    5104 start.go:93] Provisioning new machine with config: &{Name:kindnet-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:39:48.657958    5104 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:39:48.665656    5104 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:39:48.706839    5104 start.go:159] libmachine.API.Create for "kindnet-298000" (driver="qemu2")
	I1001 12:39:48.706906    5104 client.go:168] LocalClient.Create starting
	I1001 12:39:48.707036    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:39:48.707090    5104 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:48.707105    5104 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:48.707195    5104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:39:48.707252    5104 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:48.707267    5104 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:48.707823    5104 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:39:48.873386    5104 main.go:141] libmachine: Creating SSH key...
	I1001 12:39:48.946734    5104 main.go:141] libmachine: Creating Disk image...
	I1001 12:39:48.946748    5104 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:39:48.946994    5104 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:48.956856    5104 main.go:141] libmachine: STDOUT: 
	I1001 12:39:48.956876    5104 main.go:141] libmachine: STDERR: 
	I1001 12:39:48.956939    5104 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2 +20000M
	I1001 12:39:48.965040    5104 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:39:48.965057    5104 main.go:141] libmachine: STDERR: 
	I1001 12:39:48.965070    5104 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:48.965075    5104 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:39:48.965092    5104 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:39:48.965114    5104 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:e0:2a:9d:b2:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kindnet-298000/disk.qcow2
	I1001 12:39:48.966866    5104 main.go:141] libmachine: STDOUT: 
	I1001 12:39:48.966882    5104 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:39:48.966893    5104 client.go:171] duration metric: took 259.969709ms to LocalClient.Create
	I1001 12:39:50.969062    5104 start.go:128] duration metric: took 2.311123792s to createHost
	I1001 12:39:50.969188    5104 start.go:83] releasing machines lock for "kindnet-298000", held for 2.311597584s
	W1001 12:39:50.969591    5104 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:50.987411    5104 out.go:201] 
	W1001 12:39:50.994446    5104 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:39:50.994472    5104 out.go:270] * 
	* 
	W1001 12:39:50.997112    5104 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:39:51.012356    5104 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.88s)
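Every failure in this group reduces to the same root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the QEMU VM is never launched. As a quick way to confirm that condition on the build host, here is a minimal, hypothetical Go probe (not part of minikube or the test suite) that dials the socket the same way the client does:

	// probesock.go - hypothetical pre-flight check; not part of minikube.
	// When the socket_vmnet daemon is not running (or the CI agent lacks
	// permission on the socket), Dial fails with the same "connection
	// refused" these tests report.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}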

TestNetworkPlugins/group/calico/Start (9.87s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.872048334s)

-- stdout --
	* [calico-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-298000" primary control-plane node in "calico-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:39:53.274543    5226 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:39:53.274684    5226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:53.274688    5226 out.go:358] Setting ErrFile to fd 2...
	I1001 12:39:53.274690    5226 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:39:53.274815    5226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:39:53.275864    5226 out.go:352] Setting JSON to false
	I1001 12:39:53.292180    5226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4158,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:39:53.292259    5226 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:39:53.299379    5226 out.go:177] * [calico-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:39:53.306284    5226 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:39:53.306365    5226 notify.go:220] Checking for updates...
	I1001 12:39:53.312251    5226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:39:53.315231    5226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:39:53.318240    5226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:39:53.321273    5226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:39:53.322675    5226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:39:53.326567    5226 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:39:53.326635    5226 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:39:53.326680    5226 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:39:53.331274    5226 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:39:53.337226    5226 start.go:297] selected driver: qemu2
	I1001 12:39:53.337233    5226 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:39:53.337241    5226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:39:53.339331    5226 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:39:53.342246    5226 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:39:53.345374    5226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:39:53.345387    5226 cni.go:84] Creating CNI manager for "calico"
	I1001 12:39:53.345391    5226 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1001 12:39:53.345423    5226 start.go:340] cluster config:
	{Name:calico-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:39:53.348897    5226 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:39:53.356291    5226 out.go:177] * Starting "calico-298000" primary control-plane node in "calico-298000" cluster
	I1001 12:39:53.360258    5226 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:39:53.360280    5226 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:39:53.360288    5226 cache.go:56] Caching tarball of preloaded images
	I1001 12:39:53.360363    5226 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:39:53.360369    5226 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:39:53.360434    5226 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/calico-298000/config.json ...
	I1001 12:39:53.360444    5226 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/calico-298000/config.json: {Name:mkf408dd119b947b744f5eba93ebd817632cce60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:39:53.360657    5226 start.go:360] acquireMachinesLock for calico-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:39:53.360688    5226 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "calico-298000"
	I1001 12:39:53.360699    5226 start.go:93] Provisioning new machine with config: &{Name:calico-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:39:53.360731    5226 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:39:53.369233    5226 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:39:53.384543    5226 start.go:159] libmachine.API.Create for "calico-298000" (driver="qemu2")
	I1001 12:39:53.384570    5226 client.go:168] LocalClient.Create starting
	I1001 12:39:53.384641    5226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:39:53.384670    5226 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:53.384678    5226 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:53.384720    5226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:39:53.384742    5226 main.go:141] libmachine: Decoding PEM data...
	I1001 12:39:53.384749    5226 main.go:141] libmachine: Parsing certificate...
	I1001 12:39:53.385136    5226 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:39:53.545094    5226 main.go:141] libmachine: Creating SSH key...
	I1001 12:39:53.719550    5226 main.go:141] libmachine: Creating Disk image...
	I1001 12:39:53.719561    5226 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:39:53.719769    5226 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:39:53.729674    5226 main.go:141] libmachine: STDOUT: 
	I1001 12:39:53.729700    5226 main.go:141] libmachine: STDERR: 
	I1001 12:39:53.729768    5226 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2 +20000M
	I1001 12:39:53.738028    5226 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:39:53.738047    5226 main.go:141] libmachine: STDERR: 
	I1001 12:39:53.738073    5226 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:39:53.738078    5226 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:39:53.738088    5226 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:39:53.738120    5226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:27:e1:84:ea:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:39:53.739825    5226 main.go:141] libmachine: STDOUT: 
	I1001 12:39:53.739855    5226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:39:53.739883    5226 client.go:171] duration metric: took 355.317ms to LocalClient.Create
	I1001 12:39:55.742104    5226 start.go:128] duration metric: took 2.381394334s to createHost
	I1001 12:39:55.742212    5226 start.go:83] releasing machines lock for "calico-298000", held for 2.381577166s
	W1001 12:39:55.742276    5226 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:55.758258    5226 out.go:177] * Deleting "calico-298000" in qemu2 ...
	W1001 12:39:55.789892    5226 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:39:55.789919    5226 start.go:729] Will try again in 5 seconds ...
	I1001 12:40:00.791932    5226 start.go:360] acquireMachinesLock for calico-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:00.792263    5226 start.go:364] duration metric: took 271.625µs to acquireMachinesLock for "calico-298000"
	I1001 12:40:00.792306    5226 start.go:93] Provisioning new machine with config: &{Name:calico-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:00.792503    5226 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:00.803959    5226 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:00.838985    5226 start.go:159] libmachine.API.Create for "calico-298000" (driver="qemu2")
	I1001 12:40:00.839037    5226 client.go:168] LocalClient.Create starting
	I1001 12:40:00.839135    5226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:00.839201    5226 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:00.839218    5226 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:00.839273    5226 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:00.839314    5226 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:00.839329    5226 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:00.839935    5226 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:01.008112    5226 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:01.058734    5226 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:01.058743    5226 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:01.058956    5226 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:40:01.068806    5226 main.go:141] libmachine: STDOUT: 
	I1001 12:40:01.068822    5226 main.go:141] libmachine: STDERR: 
	I1001 12:40:01.068910    5226 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2 +20000M
	I1001 12:40:01.077272    5226 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:01.077287    5226 main.go:141] libmachine: STDERR: 
	I1001 12:40:01.077299    5226 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:40:01.077304    5226 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:01.077317    5226 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:01.077358    5226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c7:29:b8:1c:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/calico-298000/disk.qcow2
	I1001 12:40:01.079060    5226 main.go:141] libmachine: STDOUT: 
	I1001 12:40:01.079073    5226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:01.079085    5226 client.go:171] duration metric: took 240.048416ms to LocalClient.Create
	I1001 12:40:03.081126    5226 start.go:128] duration metric: took 2.288665458s to createHost
	I1001 12:40:03.081167    5226 start.go:83] releasing machines lock for "calico-298000", held for 2.288947125s
	W1001 12:40:03.081324    5226 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:03.089728    5226 out.go:201] 
	W1001 12:40:03.098857    5226 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:40:03.098866    5226 out.go:270] * 
	* 
	W1001 12:40:03.099635    5226 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:40:03.106705    5226 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.87s)
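The logs above show the driver's recovery behaviour: one failed createHost, deletion of the half-created profile, a fixed five-second pause ("Will try again in 5 seconds ..."), a single retry, and then exit status 80. A sketch of that control flow, using invented names that do not mirror minikube's internals, looks like this:

	// retrysketch.go - illustrative only; names are hypothetical.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for the real VM-creation step; in these logs it
	// always fails because the socket_vmnet daemon is unreachable.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status net_test.go:114 reports
			}
		}
	}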

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.842731792s)

-- stdout --
	* [custom-flannel-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-298000" primary control-plane node in "custom-flannel-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:40:05.549340    5349 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:40:05.549480    5349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:05.549487    5349 out.go:358] Setting ErrFile to fd 2...
	I1001 12:40:05.549490    5349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:05.549635    5349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:40:05.550737    5349 out.go:352] Setting JSON to false
	I1001 12:40:05.567052    5349 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4170,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:40:05.567130    5349 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:40:05.571892    5349 out.go:177] * [custom-flannel-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:40:05.580756    5349 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:40:05.580815    5349 notify.go:220] Checking for updates...
	I1001 12:40:05.586714    5349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:40:05.589715    5349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:40:05.592744    5349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:40:05.595703    5349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:40:05.598734    5349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:40:05.602069    5349 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:40:05.602135    5349 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:40:05.602192    5349 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:40:05.606762    5349 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:40:05.626723    5349 start.go:297] selected driver: qemu2
	I1001 12:40:05.626729    5349 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:40:05.626736    5349 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:40:05.629005    5349 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:40:05.632717    5349 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:40:05.635856    5349 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:40:05.635884    5349 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1001 12:40:05.635893    5349 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1001 12:40:05.635934    5349 start.go:340] cluster config:
	{Name:custom-flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:40:05.640002    5349 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:40:05.647762    5349 out.go:177] * Starting "custom-flannel-298000" primary control-plane node in "custom-flannel-298000" cluster
	I1001 12:40:05.650702    5349 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:40:05.650726    5349 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:40:05.650737    5349 cache.go:56] Caching tarball of preloaded images
	I1001 12:40:05.650821    5349 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:40:05.650828    5349 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:40:05.650894    5349 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/custom-flannel-298000/config.json ...
	I1001 12:40:05.650910    5349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/custom-flannel-298000/config.json: {Name:mkecd9416f4f369c5eb5bc7540983d6f3c9be924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:40:05.651351    5349 start.go:360] acquireMachinesLock for custom-flannel-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:05.651397    5349 start.go:364] duration metric: took 32.125µs to acquireMachinesLock for "custom-flannel-298000"
	I1001 12:40:05.651410    5349 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:05.651440    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:05.658714    5349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:05.678128    5349 start.go:159] libmachine.API.Create for "custom-flannel-298000" (driver="qemu2")
	I1001 12:40:05.678169    5349 client.go:168] LocalClient.Create starting
	I1001 12:40:05.678241    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:05.678275    5349 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:05.678285    5349 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:05.678330    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:05.678355    5349 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:05.678363    5349 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:05.678820    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:05.838772    5349 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:05.922244    5349 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:05.922253    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:05.922441    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:05.931797    5349 main.go:141] libmachine: STDOUT: 
	I1001 12:40:05.931817    5349 main.go:141] libmachine: STDERR: 
	I1001 12:40:05.931891    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2 +20000M
	I1001 12:40:05.939862    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:05.939880    5349 main.go:141] libmachine: STDERR: 
	I1001 12:40:05.939898    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:05.939903    5349 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:05.939914    5349 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:05.939939    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a2:63:ed:24:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:05.941634    5349 main.go:141] libmachine: STDOUT: 
	I1001 12:40:05.941649    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:05.941670    5349 client.go:171] duration metric: took 263.501875ms to LocalClient.Create
	I1001 12:40:07.943822    5349 start.go:128] duration metric: took 2.292418791s to createHost
	I1001 12:40:07.943910    5349 start.go:83] releasing machines lock for "custom-flannel-298000", held for 2.292551625s
	W1001 12:40:07.944015    5349 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:07.954916    5349 out.go:177] * Deleting "custom-flannel-298000" in qemu2 ...
	W1001 12:40:07.991703    5349 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:07.991729    5349 start.go:729] Will try again in 5 seconds ...
	I1001 12:40:12.992012    5349 start.go:360] acquireMachinesLock for custom-flannel-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:12.992175    5349 start.go:364] duration metric: took 137.458µs to acquireMachinesLock for "custom-flannel-298000"
	I1001 12:40:12.992217    5349 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:12.992278    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:13.001567    5349 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:13.023985    5349 start.go:159] libmachine.API.Create for "custom-flannel-298000" (driver="qemu2")
	I1001 12:40:13.024018    5349 client.go:168] LocalClient.Create starting
	I1001 12:40:13.024095    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:13.024138    5349 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:13.024153    5349 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:13.024194    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:13.024228    5349 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:13.024237    5349 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:13.024724    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:13.186268    5349 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:13.295152    5349 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:13.295164    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:13.295413    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:13.305656    5349 main.go:141] libmachine: STDOUT: 
	I1001 12:40:13.305689    5349 main.go:141] libmachine: STDERR: 
	I1001 12:40:13.305771    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2 +20000M
	I1001 12:40:13.315185    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:13.315213    5349 main.go:141] libmachine: STDERR: 
	I1001 12:40:13.315231    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:13.315237    5349 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:13.315243    5349 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:13.315270    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:7a:47:5f:10:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/custom-flannel-298000/disk.qcow2
	I1001 12:40:13.317457    5349 main.go:141] libmachine: STDOUT: 
	I1001 12:40:13.317474    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:13.317491    5349 client.go:171] duration metric: took 293.474625ms to LocalClient.Create
	I1001 12:40:15.319725    5349 start.go:128] duration metric: took 2.327458s to createHost
	I1001 12:40:15.319812    5349 start.go:83] releasing machines lock for "custom-flannel-298000", held for 2.32768525s
	W1001 12:40:15.320049    5349 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:15.332712    5349 out.go:201] 
	W1001 12:40:15.335787    5349 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:40:15.335808    5349 out.go:270] * 
	W1001 12:40:15.337489    5349 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:40:15.350594    5349 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)
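[Editor's note] Every failure in this group shares one root cause, visible in the stderr above: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing is listening on /var/run/socket_vmnet, so every VM create fails with "Connection refused" and the start exits with status 80. A minimal triage sketch for the CI host follows; the launchd label is an assumption (it matches a from-source install of socket_vmnet under /opt/socket_vmnet; a Homebrew install would use brew services instead):

	# Is anything holding the Unix socket the client dials? (path taken from the log above)
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet
	# If the daemon is down, kick it. "io.github.lima-vm.socket_vmnet" is the assumed label
	# from a source install; with Homebrew, `sudo brew services restart socket_vmnet` is the analogue.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	# Then clean up the half-created profile, as the error text itself suggests:
	out/minikube-darwin-arm64 delete -p custom-flannel-298000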

TestNetworkPlugins/group/false/Start (9.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.78781425s)

-- stdout --
	* [false-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-298000" primary control-plane node in "false-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:40:17.749267    5473 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:40:17.749414    5473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:17.749418    5473 out.go:358] Setting ErrFile to fd 2...
	I1001 12:40:17.749420    5473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:17.749543    5473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:40:17.750593    5473 out.go:352] Setting JSON to false
	I1001 12:40:17.766859    5473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4182,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:40:17.766920    5473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:40:17.774922    5473 out.go:177] * [false-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:40:17.781702    5473 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:40:17.781778    5473 notify.go:220] Checking for updates...
	I1001 12:40:17.789668    5473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:40:17.792698    5473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:40:17.795726    5473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:40:17.798713    5473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:40:17.801644    5473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:40:17.805075    5473 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:40:17.805138    5473 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:40:17.805182    5473 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:40:17.809645    5473 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:40:17.816741    5473 start.go:297] selected driver: qemu2
	I1001 12:40:17.816746    5473 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:40:17.816752    5473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:40:17.818760    5473 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:40:17.821690    5473 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:40:17.824762    5473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:40:17.824782    5473 cni.go:84] Creating CNI manager for "false"
	I1001 12:40:17.824804    5473 start.go:340] cluster config:
	{Name:false-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:40:17.828089    5473 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:40:17.835707    5473 out.go:177] * Starting "false-298000" primary control-plane node in "false-298000" cluster
	I1001 12:40:17.839739    5473 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:40:17.839753    5473 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:40:17.839766    5473 cache.go:56] Caching tarball of preloaded images
	I1001 12:40:17.839833    5473 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:40:17.839839    5473 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:40:17.839903    5473 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/false-298000/config.json ...
	I1001 12:40:17.839913    5473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/false-298000/config.json: {Name:mked4393d72bab5e379f8109fff6aefc33f70e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:40:17.840117    5473 start.go:360] acquireMachinesLock for false-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:17.840146    5473 start.go:364] duration metric: took 23.875µs to acquireMachinesLock for "false-298000"
	I1001 12:40:17.840157    5473 start.go:93] Provisioning new machine with config: &{Name:false-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:17.840185    5473 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:17.848716    5473 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:17.864524    5473 start.go:159] libmachine.API.Create for "false-298000" (driver="qemu2")
	I1001 12:40:17.864560    5473 client.go:168] LocalClient.Create starting
	I1001 12:40:17.864642    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:17.864673    5473 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:17.864682    5473 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:17.864725    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:17.864748    5473 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:17.864757    5473 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:17.865176    5473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:18.024798    5473 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:18.098784    5473 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:18.098791    5473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:18.098990    5473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:18.107987    5473 main.go:141] libmachine: STDOUT: 
	I1001 12:40:18.108002    5473 main.go:141] libmachine: STDERR: 
	I1001 12:40:18.108084    5473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2 +20000M
	I1001 12:40:18.115993    5473 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:18.116016    5473 main.go:141] libmachine: STDERR: 
	I1001 12:40:18.116029    5473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:18.116034    5473 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:18.116043    5473 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:18.116072    5473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:98:cb:37:75:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:18.117803    5473 main.go:141] libmachine: STDOUT: 
	I1001 12:40:18.117818    5473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:18.117840    5473 client.go:171] duration metric: took 253.274959ms to LocalClient.Create
	I1001 12:40:20.120090    5473 start.go:128] duration metric: took 2.279940458s to createHost
	I1001 12:40:20.120163    5473 start.go:83] releasing machines lock for "false-298000", held for 2.280066209s
	W1001 12:40:20.120235    5473 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:20.138468    5473 out.go:177] * Deleting "false-298000" in qemu2 ...
	W1001 12:40:20.171130    5473 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:20.171161    5473 start.go:729] Will try again in 5 seconds ...
	I1001 12:40:25.173251    5473 start.go:360] acquireMachinesLock for false-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:25.173682    5473 start.go:364] duration metric: took 338.375µs to acquireMachinesLock for "false-298000"
	I1001 12:40:25.173783    5473 start.go:93] Provisioning new machine with config: &{Name:false-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:25.173940    5473 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:25.180632    5473 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:25.215704    5473 start.go:159] libmachine.API.Create for "false-298000" (driver="qemu2")
	I1001 12:40:25.215772    5473 client.go:168] LocalClient.Create starting
	I1001 12:40:25.215871    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:25.215937    5473 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:25.215953    5473 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:25.216010    5473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:25.216044    5473 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:25.216054    5473 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:25.216658    5473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:25.381822    5473 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:25.446343    5473 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:25.446349    5473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:25.446556    5473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:25.456217    5473 main.go:141] libmachine: STDOUT: 
	I1001 12:40:25.456244    5473 main.go:141] libmachine: STDERR: 
	I1001 12:40:25.456308    5473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2 +20000M
	I1001 12:40:25.464486    5473 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:25.464509    5473 main.go:141] libmachine: STDERR: 
	I1001 12:40:25.464519    5473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:25.464523    5473 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:25.464532    5473 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:25.464562    5473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:10:20:25:e6:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/false-298000/disk.qcow2
	I1001 12:40:25.466237    5473 main.go:141] libmachine: STDOUT: 
	I1001 12:40:25.466253    5473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:25.466265    5473 client.go:171] duration metric: took 250.49425ms to LocalClient.Create
	I1001 12:40:27.468305    5473 start.go:128] duration metric: took 2.294406125s to createHost
	I1001 12:40:27.468343    5473 start.go:83] releasing machines lock for "false-298000", held for 2.294702333s
	W1001 12:40:27.468546    5473 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:27.477889    5473 out.go:201] 
	W1001 12:40:27.485971    5473 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:40:27.485998    5473 out.go:270] * 
	W1001 12:40:27.487074    5473 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:40:27.498896    5473 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.79s)

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.826709416s)

-- stdout --
	* [enable-default-cni-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-298000" primary control-plane node in "enable-default-cni-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:40:29.724895    5586 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:40:29.725046    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:29.725050    5586 out.go:358] Setting ErrFile to fd 2...
	I1001 12:40:29.725053    5586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:29.725185    5586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:40:29.726345    5586 out.go:352] Setting JSON to false
	I1001 12:40:29.742747    5586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4194,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:40:29.742827    5586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:40:29.747697    5586 out.go:177] * [enable-default-cni-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:40:29.756545    5586 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:40:29.756601    5586 notify.go:220] Checking for updates...
	I1001 12:40:29.764408    5586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:40:29.767459    5586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:40:29.770473    5586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:40:29.773426    5586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:40:29.776448    5586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:40:29.779888    5586 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:40:29.779955    5586 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:40:29.780011    5586 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:40:29.784504    5586 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:40:29.791524    5586 start.go:297] selected driver: qemu2
	I1001 12:40:29.791530    5586 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:40:29.791537    5586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:40:29.793631    5586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:40:29.796458    5586 out.go:177] * Automatically selected the socket_vmnet network
	E1001 12:40:29.799484    5586 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1001 12:40:29.799496    5586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:40:29.799512    5586 cni.go:84] Creating CNI manager for "bridge"
	I1001 12:40:29.799517    5586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:40:29.799553    5586 start.go:340] cluster config:
	{Name:enable-default-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:40:29.803016    5586 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:40:29.806580    5586 out.go:177] * Starting "enable-default-cni-298000" primary control-plane node in "enable-default-cni-298000" cluster
	I1001 12:40:29.814472    5586 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:40:29.814485    5586 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:40:29.814491    5586 cache.go:56] Caching tarball of preloaded images
	I1001 12:40:29.814539    5586 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:40:29.814544    5586 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:40:29.814602    5586 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/enable-default-cni-298000/config.json ...
	I1001 12:40:29.814612    5586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/enable-default-cni-298000/config.json: {Name:mk2d31bc614d342e12c5b084a9d8f8ba7e377c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:40:29.814819    5586 start.go:360] acquireMachinesLock for enable-default-cni-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:29.814853    5586 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "enable-default-cni-298000"
	I1001 12:40:29.814865    5586 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:29.814894    5586 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:29.822474    5586 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:29.837689    5586 start.go:159] libmachine.API.Create for "enable-default-cni-298000" (driver="qemu2")
	I1001 12:40:29.837721    5586 client.go:168] LocalClient.Create starting
	I1001 12:40:29.837786    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:29.837835    5586 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:29.837846    5586 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:29.837871    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:29.837894    5586 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:29.837900    5586 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:29.838273    5586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:29.998531    5586 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:30.050245    5586 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:30.050253    5586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:30.050446    5586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:30.059624    5586 main.go:141] libmachine: STDOUT: 
	I1001 12:40:30.059642    5586 main.go:141] libmachine: STDERR: 
	I1001 12:40:30.059700    5586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2 +20000M
	I1001 12:40:30.067758    5586 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:30.067775    5586 main.go:141] libmachine: STDERR: 
	I1001 12:40:30.067797    5586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:30.067807    5586 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:30.067826    5586 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:30.067853    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c3:b9:c3:a8:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:30.069562    5586 main.go:141] libmachine: STDOUT: 
	I1001 12:40:30.069578    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:30.069598    5586 client.go:171] duration metric: took 231.872875ms to LocalClient.Create
	I1001 12:40:32.071854    5586 start.go:128] duration metric: took 2.256974458s to createHost
	I1001 12:40:32.071953    5586 start.go:83] releasing machines lock for "enable-default-cni-298000", held for 2.257148958s
	W1001 12:40:32.072023    5586 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:32.086309    5586 out.go:177] * Deleting "enable-default-cni-298000" in qemu2 ...
	W1001 12:40:32.122376    5586 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:32.122405    5586 start.go:729] Will try again in 5 seconds ...
	I1001 12:40:37.123740    5586 start.go:360] acquireMachinesLock for enable-default-cni-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:37.123984    5586 start.go:364] duration metric: took 203.5µs to acquireMachinesLock for "enable-default-cni-298000"
	I1001 12:40:37.124015    5586 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:37.124142    5586 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:37.136494    5586 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:37.167015    5586 start.go:159] libmachine.API.Create for "enable-default-cni-298000" (driver="qemu2")
	I1001 12:40:37.167068    5586 client.go:168] LocalClient.Create starting
	I1001 12:40:37.167176    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:37.167238    5586 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:37.167252    5586 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:37.167304    5586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:37.167342    5586 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:37.167356    5586 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:37.168047    5586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:37.341603    5586 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:37.449852    5586 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:37.449861    5586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:37.450078    5586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:37.459577    5586 main.go:141] libmachine: STDOUT: 
	I1001 12:40:37.459600    5586 main.go:141] libmachine: STDERR: 
	I1001 12:40:37.459652    5586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2 +20000M
	I1001 12:40:37.467536    5586 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:37.467558    5586 main.go:141] libmachine: STDERR: 
	I1001 12:40:37.467569    5586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:37.467574    5586 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:37.467587    5586 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:37.467631    5586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:37:af:24:3e:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/enable-default-cni-298000/disk.qcow2
	I1001 12:40:37.469331    5586 main.go:141] libmachine: STDOUT: 
	I1001 12:40:37.469351    5586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:37.469364    5586 client.go:171] duration metric: took 302.299417ms to LocalClient.Create
	I1001 12:40:39.471499    5586 start.go:128] duration metric: took 2.347391s to createHost
	I1001 12:40:39.471604    5586 start.go:83] releasing machines lock for "enable-default-cni-298000", held for 2.347663292s
	W1001 12:40:39.472051    5586 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:39.494819    5586 out.go:201] 
	W1001 12:40:39.498882    5586 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:40:39.498916    5586 out.go:270] * 
	W1001 12:40:39.501379    5586 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:40:39.514721    5586 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
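[Editor's note] Besides the shared socket_vmnet failure, this run also records at start_flags.go:464 that --enable-default-cni is deprecated and is rewritten internally to --cni=bridge. Once host networking works, an equivalent invocation without the deprecated flag would be (a sketch assembled from the test's own Run line above):

	out/minikube-darwin-arm64 start -p enable-default-cni-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2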

TestNetworkPlugins/group/flannel/Start (9.96s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E1001 12:40:42.720849    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.957680792s)

-- stdout --
	* [flannel-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-298000" primary control-plane node in "flannel-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:40:41.735543    5704 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:40:41.735691    5704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:41.735694    5704 out.go:358] Setting ErrFile to fd 2...
	I1001 12:40:41.735697    5704 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:41.735846    5704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:40:41.736908    5704 out.go:352] Setting JSON to false
	I1001 12:40:41.753866    5704 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4206,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:40:41.753987    5704 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:40:41.760502    5704 out.go:177] * [flannel-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:40:41.768472    5704 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:40:41.768523    5704 notify.go:220] Checking for updates...
	I1001 12:40:41.773409    5704 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:40:41.776407    5704 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:40:41.779503    5704 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:40:41.782469    5704 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:40:41.785386    5704 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:40:41.788803    5704 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:40:41.788875    5704 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:40:41.788923    5704 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:40:41.793310    5704 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:40:41.800444    5704 start.go:297] selected driver: qemu2
	I1001 12:40:41.800450    5704 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:40:41.800456    5704 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:40:41.802823    5704 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:40:41.806457    5704 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:40:41.809460    5704 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:40:41.809481    5704 cni.go:84] Creating CNI manager for "flannel"
	I1001 12:40:41.809484    5704 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1001 12:40:41.809510    5704 start.go:340] cluster config:
	{Name:flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:40:41.813268    5704 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:40:41.818390    5704 out.go:177] * Starting "flannel-298000" primary control-plane node in "flannel-298000" cluster
	I1001 12:40:41.822326    5704 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:40:41.822341    5704 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:40:41.822360    5704 cache.go:56] Caching tarball of preloaded images
	I1001 12:40:41.822430    5704 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:40:41.822437    5704 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:40:41.822500    5704 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/flannel-298000/config.json ...
	I1001 12:40:41.822512    5704 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/flannel-298000/config.json: {Name:mk129abcd4faca7c72853fa884c39a2a6cd851b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:40:41.822737    5704 start.go:360] acquireMachinesLock for flannel-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:41.822774    5704 start.go:364] duration metric: took 30.416µs to acquireMachinesLock for "flannel-298000"
	I1001 12:40:41.822789    5704 start.go:93] Provisioning new machine with config: &{Name:flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:41.822820    5704 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:41.831376    5704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:41.849221    5704 start.go:159] libmachine.API.Create for "flannel-298000" (driver="qemu2")
	I1001 12:40:41.849256    5704 client.go:168] LocalClient.Create starting
	I1001 12:40:41.849343    5704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:41.849376    5704 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:41.849385    5704 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:41.849431    5704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:41.849456    5704 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:41.849465    5704 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:41.849839    5704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:42.009345    5704 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:42.136602    5704 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:42.136611    5704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:42.136816    5704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:42.146003    5704 main.go:141] libmachine: STDOUT: 
	I1001 12:40:42.146019    5704 main.go:141] libmachine: STDERR: 
	I1001 12:40:42.146090    5704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2 +20000M
	I1001 12:40:42.153946    5704 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:42.153974    5704 main.go:141] libmachine: STDERR: 
	I1001 12:40:42.154040    5704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:42.154048    5704 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:42.154058    5704 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:42.154088    5704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:1f:64:ec:f0:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:42.155794    5704 main.go:141] libmachine: STDOUT: 
	I1001 12:40:42.155811    5704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:42.155836    5704 client.go:171] duration metric: took 306.582541ms to LocalClient.Create
	I1001 12:40:44.158075    5704 start.go:128] duration metric: took 2.335280875s to createHost
	I1001 12:40:44.158177    5704 start.go:83] releasing machines lock for "flannel-298000", held for 2.335452041s
	W1001 12:40:44.158235    5704 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:44.170035    5704 out.go:177] * Deleting "flannel-298000" in qemu2 ...
	W1001 12:40:44.197669    5704 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:44.197690    5704 start.go:729] Will try again in 5 seconds ...
	I1001 12:40:49.197775    5704 start.go:360] acquireMachinesLock for flannel-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:49.198106    5704 start.go:364] duration metric: took 265.75µs to acquireMachinesLock for "flannel-298000"
	I1001 12:40:49.198185    5704 start.go:93] Provisioning new machine with config: &{Name:flannel-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:49.198319    5704 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:49.205891    5704 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:49.240502    5704 start.go:159] libmachine.API.Create for "flannel-298000" (driver="qemu2")
	I1001 12:40:49.240549    5704 client.go:168] LocalClient.Create starting
	I1001 12:40:49.240652    5704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:49.240713    5704 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:49.240730    5704 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:49.240800    5704 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:49.240839    5704 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:49.240848    5704 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:49.241468    5704 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:49.408841    5704 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:49.599244    5704 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:49.599260    5704 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:49.599447    5704 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:49.608467    5704 main.go:141] libmachine: STDOUT: 
	I1001 12:40:49.608490    5704 main.go:141] libmachine: STDERR: 
	I1001 12:40:49.608541    5704 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2 +20000M
	I1001 12:40:49.616474    5704 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:49.616493    5704 main.go:141] libmachine: STDERR: 
	I1001 12:40:49.616503    5704 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:49.616509    5704 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:49.616516    5704 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:49.616549    5704 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f8:aa:9b:c5:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/flannel-298000/disk.qcow2
	I1001 12:40:49.618199    5704 main.go:141] libmachine: STDOUT: 
	I1001 12:40:49.618219    5704 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:49.618234    5704 client.go:171] duration metric: took 377.689208ms to LocalClient.Create
	I1001 12:40:51.620392    5704 start.go:128] duration metric: took 2.422094792s to createHost
	I1001 12:40:51.620473    5704 start.go:83] releasing machines lock for "flannel-298000", held for 2.422411958s
	W1001 12:40:51.620947    5704 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:51.634638    5704 out.go:201] 
	W1001 12:40:51.638393    5704 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:40:51.638414    5704 out.go:270] * 
	* 
	W1001 12:40:51.640234    5704 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:40:51.649587    5704 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.96s)
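The flannel run fails identically, so every plugin in the group spends roughly 10s reaching a guaranteed exit status 80. One possible mitigation, sketched here as a hypothetical helper (requireSocketVMnet does not exist in net_test.go; the package name and the hard-coded socket path are assumptions based on this report), is to skip the group up front when the daemon is unreachable:

package integration

import (
	"net"
	"testing"
)

// requireSocketVMnet skips the calling test when the socket_vmnet daemon
// is not accepting connections, instead of letting the qemu2 start fail
// with GUEST_PROVISION after two create attempts.
func requireSocketVMnet(t *testing.T) {
	t.Helper()
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		t.Skipf("socket_vmnet unavailable: %v", err)
	}
	conn.Close()
}

Called before the start invocation at net_test.go:112, this would turn the remaining failures in the group into skips.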

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.83392375s)

                                                
                                                
-- stdout --
	* [bridge-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-298000" primary control-plane node in "bridge-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:40:54.094796    5829 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:40:54.094938    5829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:54.094944    5829 out.go:358] Setting ErrFile to fd 2...
	I1001 12:40:54.094946    5829 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:40:54.095076    5829 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:40:54.096251    5829 out.go:352] Setting JSON to false
	I1001 12:40:54.112793    5829 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4219,"bootTime":1727807435,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:40:54.112874    5829 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:40:54.119307    5829 out.go:177] * [bridge-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:40:54.126442    5829 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:40:54.126505    5829 notify.go:220] Checking for updates...
	I1001 12:40:54.134382    5829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:40:54.137406    5829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:40:54.140394    5829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:40:54.145411    5829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:40:54.152339    5829 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:40:54.155718    5829 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:40:54.155782    5829 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:40:54.155832    5829 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:40:54.159392    5829 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:40:54.166350    5829 start.go:297] selected driver: qemu2
	I1001 12:40:54.166355    5829 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:40:54.166360    5829 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:40:54.168388    5829 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:40:54.169624    5829 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:40:54.172429    5829 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:40:54.172443    5829 cni.go:84] Creating CNI manager for "bridge"
	I1001 12:40:54.172451    5829 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:40:54.172479    5829 start.go:340] cluster config:
	{Name:bridge-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:40:54.175739    5829 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:40:54.182399    5829 out.go:177] * Starting "bridge-298000" primary control-plane node in "bridge-298000" cluster
	I1001 12:40:54.186341    5829 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:40:54.186365    5829 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:40:54.186376    5829 cache.go:56] Caching tarball of preloaded images
	I1001 12:40:54.186438    5829 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:40:54.186443    5829 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:40:54.186495    5829 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/bridge-298000/config.json ...
	I1001 12:40:54.186504    5829 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/bridge-298000/config.json: {Name:mk601354dbf91c67771a5566a5834813912534b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:40:54.186726    5829 start.go:360] acquireMachinesLock for bridge-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:40:54.186758    5829 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "bridge-298000"
	I1001 12:40:54.186768    5829 start.go:93] Provisioning new machine with config: &{Name:bridge-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:40:54.186793    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:40:54.195366    5829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:40:54.210663    5829 start.go:159] libmachine.API.Create for "bridge-298000" (driver="qemu2")
	I1001 12:40:54.210691    5829 client.go:168] LocalClient.Create starting
	I1001 12:40:54.210756    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:40:54.210788    5829 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:54.210798    5829 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:54.210849    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:40:54.210874    5829 main.go:141] libmachine: Decoding PEM data...
	I1001 12:40:54.210882    5829 main.go:141] libmachine: Parsing certificate...
	I1001 12:40:54.211242    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:40:54.372538    5829 main.go:141] libmachine: Creating SSH key...
	I1001 12:40:54.446139    5829 main.go:141] libmachine: Creating Disk image...
	I1001 12:40:54.446151    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:40:54.446336    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:40:54.455596    5829 main.go:141] libmachine: STDOUT: 
	I1001 12:40:54.455768    5829 main.go:141] libmachine: STDERR: 
	I1001 12:40:54.455823    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2 +20000M
	I1001 12:40:54.463681    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:40:54.463695    5829 main.go:141] libmachine: STDERR: 
	I1001 12:40:54.463712    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:40:54.463715    5829 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:40:54.463731    5829 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:40:54.463762    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:3b:b3:e6:fa:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:40:54.465486    5829 main.go:141] libmachine: STDOUT: 
	I1001 12:40:54.465498    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:40:54.465520    5829 client.go:171] duration metric: took 254.826167ms to LocalClient.Create
	I1001 12:40:56.467774    5829 start.go:128] duration metric: took 2.281001208s to createHost
	I1001 12:40:56.467872    5829 start.go:83] releasing machines lock for "bridge-298000", held for 2.281163s
	W1001 12:40:56.467948    5829 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:56.485113    5829 out.go:177] * Deleting "bridge-298000" in qemu2 ...
	W1001 12:40:56.518076    5829 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:40:56.518106    5829 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:01.520142    5829 start.go:360] acquireMachinesLock for bridge-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:01.520620    5829 start.go:364] duration metric: took 395.834µs to acquireMachinesLock for "bridge-298000"
	I1001 12:41:01.520671    5829 start.go:93] Provisioning new machine with config: &{Name:bridge-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:01.520860    5829 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:01.530783    5829 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:41:01.573603    5829 start.go:159] libmachine.API.Create for "bridge-298000" (driver="qemu2")
	I1001 12:41:01.573655    5829 client.go:168] LocalClient.Create starting
	I1001 12:41:01.573768    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:01.573833    5829 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:01.573846    5829 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:01.573899    5829 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:01.573939    5829 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:01.573955    5829 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:01.574545    5829 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:01.739348    5829 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:01.823590    5829 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:01.823597    5829 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:01.823829    5829 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:41:01.832994    5829 main.go:141] libmachine: STDOUT: 
	I1001 12:41:01.833015    5829 main.go:141] libmachine: STDERR: 
	I1001 12:41:01.833082    5829 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2 +20000M
	I1001 12:41:01.841233    5829 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:01.841249    5829 main.go:141] libmachine: STDERR: 
	I1001 12:41:01.841266    5829 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:41:01.841271    5829 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:01.841279    5829 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:01.841308    5829 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:fd:f0:79:65:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/bridge-298000/disk.qcow2
	I1001 12:41:01.843001    5829 main.go:141] libmachine: STDOUT: 
	I1001 12:41:01.843018    5829 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:01.843038    5829 client.go:171] duration metric: took 269.384042ms to LocalClient.Create
	I1001 12:41:03.845202    5829 start.go:128] duration metric: took 2.324357667s to createHost
	I1001 12:41:03.845298    5829 start.go:83] releasing machines lock for "bridge-298000", held for 2.324720334s
	W1001 12:41:03.845618    5829 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:03.865441    5829 out.go:201] 
	W1001 12:41:03.869397    5829 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:03.869424    5829 out.go:270] * 
	* 
	W1001 12:41:03.871929    5829 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:03.886307    5829 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.84s)
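The exit status 80 asserted at net_test.go:114 corresponds to the GUEST_PROVISION error class shown in the stderr blocks above. A minimal sketch of reproducing that assertion by hand with os/exec follows; the binary path and profile name are copied from the bridge run, and this program is not part of the harness:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Re-run one failing start and surface the exit code the tests compare
	// against; while socket_vmnet refuses connections this prints
	// "exit status: 80".
	cmd := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "bridge-298000", "--memory=3072", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode())
	}
}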

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-298000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.92032425s)

                                                
                                                
-- stdout --
	* [kubenet-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-298000" primary control-plane node in "kubenet-298000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-298000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:41:06.096453    5949 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:06.096611    5949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:06.096615    5949 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:06.096617    5949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:06.096760    5949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:06.097992    5949 out.go:352] Setting JSON to false
	I1001 12:41:06.114400    5949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4231,"bootTime":1727807435,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:06.114495    5949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:06.120805    5949 out.go:177] * [kubenet-298000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:06.127658    5949 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:06.127710    5949 notify.go:220] Checking for updates...
	I1001 12:41:06.134581    5949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:06.137618    5949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:06.144636    5949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:06.151685    5949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:06.157519    5949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:06.162007    5949 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:06.162084    5949 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:41:06.162130    5949 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:06.165628    5949 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:06.172646    5949 start.go:297] selected driver: qemu2
	I1001 12:41:06.172651    5949 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:06.172657    5949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:06.174713    5949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:41:06.178491    5949 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:06.182726    5949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:06.182742    5949 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1001 12:41:06.182770    5949 start.go:340] cluster config:
	{Name:kubenet-298000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:06.186331    5949 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:06.193613    5949 out.go:177] * Starting "kubenet-298000" primary control-plane node in "kubenet-298000" cluster
	I1001 12:41:06.197628    5949 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:06.197647    5949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:41:06.197653    5949 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:06.197708    5949 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:06.197713    5949 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:41:06.197777    5949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kubenet-298000/config.json ...
	I1001 12:41:06.197787    5949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/kubenet-298000/config.json: {Name:mkd3fc4ed222b123609856e3983076be88bf66a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:06.198033    5949 start.go:360] acquireMachinesLock for kubenet-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:06.198072    5949 start.go:364] duration metric: took 33µs to acquireMachinesLock for "kubenet-298000"
	I1001 12:41:06.198096    5949 start.go:93] Provisioning new machine with config: &{Name:kubenet-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:06.198125    5949 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:06.206597    5949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:41:06.221888    5949 start.go:159] libmachine.API.Create for "kubenet-298000" (driver="qemu2")
	I1001 12:41:06.221913    5949 client.go:168] LocalClient.Create starting
	I1001 12:41:06.221986    5949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:06.222018    5949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:06.222031    5949 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:06.222073    5949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:06.222099    5949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:06.222108    5949 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:06.222460    5949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:06.376798    5949 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:06.455298    5949 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:06.455307    5949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:06.455537    5949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:06.464895    5949 main.go:141] libmachine: STDOUT: 
	I1001 12:41:06.464912    5949 main.go:141] libmachine: STDERR: 
	I1001 12:41:06.464974    5949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2 +20000M
	I1001 12:41:06.472933    5949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:06.472950    5949 main.go:141] libmachine: STDERR: 
	I1001 12:41:06.472966    5949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:06.472971    5949 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:06.472985    5949 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:06.473013    5949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d8:8e:9c:b3:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:06.474587    5949 main.go:141] libmachine: STDOUT: 
	I1001 12:41:06.474603    5949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:06.474622    5949 client.go:171] duration metric: took 252.708917ms to LocalClient.Create
	I1001 12:41:08.476846    5949 start.go:128] duration metric: took 2.278745875s to createHost
	I1001 12:41:08.476967    5949 start.go:83] releasing machines lock for "kubenet-298000", held for 2.278939708s
	W1001 12:41:08.477051    5949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:08.488220    5949 out.go:177] * Deleting "kubenet-298000" in qemu2 ...
	W1001 12:41:08.524301    5949 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:08.524341    5949 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:13.526372    5949 start.go:360] acquireMachinesLock for kubenet-298000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:13.526928    5949 start.go:364] duration metric: took 460.625µs to acquireMachinesLock for "kubenet-298000"
	I1001 12:41:13.527121    5949 start.go:93] Provisioning new machine with config: &{Name:kubenet-298000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-298000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:13.527393    5949 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:13.538143    5949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 12:41:13.587581    5949 start.go:159] libmachine.API.Create for "kubenet-298000" (driver="qemu2")
	I1001 12:41:13.587642    5949 client.go:168] LocalClient.Create starting
	I1001 12:41:13.587771    5949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:13.587840    5949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:13.587858    5949 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:13.587920    5949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:13.587968    5949 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:13.587982    5949 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:13.588562    5949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:13.756761    5949 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:13.906937    5949 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:13.906945    5949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:13.907204    5949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:13.916922    5949 main.go:141] libmachine: STDOUT: 
	I1001 12:41:13.916941    5949 main.go:141] libmachine: STDERR: 
	I1001 12:41:13.916999    5949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2 +20000M
	I1001 12:41:13.924793    5949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:13.924809    5949 main.go:141] libmachine: STDERR: 
	I1001 12:41:13.924822    5949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:13.924829    5949 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:13.924838    5949 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:13.924867    5949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:a1:88:45:fd:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/kubenet-298000/disk.qcow2
	I1001 12:41:13.926530    5949 main.go:141] libmachine: STDOUT: 
	I1001 12:41:13.926545    5949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:13.926557    5949 client.go:171] duration metric: took 338.917584ms to LocalClient.Create
	I1001 12:41:15.928715    5949 start.go:128] duration metric: took 2.40134075s to createHost
	I1001 12:41:15.928788    5949 start.go:83] releasing machines lock for "kubenet-298000", held for 2.401897791s
	W1001 12:41:15.929200    5949 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-298000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:15.946901    5949 out.go:201] 
	W1001 12:41:15.959934    5949 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:15.959962    5949 out.go:270] * 
	W1001 12:41:15.962560    5949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:15.973846    5949 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.92s)
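Every failure in this run collapses to the same root cause, visible in the stderr above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the host VM is never created and Kubernetes is never involved. A minimal host-side check might look like the sketch below; it assumes socket_vmnet was installed as a Homebrew-managed service (the paths are taken verbatim from the log, the service name and the trailing probe command are assumptions):

	# Is the unix socket present where the tests expect it?
	ls -l /var/run/socket_vmnet

	# Is the daemon itself running?
	pgrep -fl socket_vmnet

	# If not, (re)start it (assumes a Homebrew-managed service).
	sudo brew services start socket_vmnet

	# The client execs its argument with the vmnet fd attached, so a
	# trivial command suffices to prove the socket accepts connections.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true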

TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.832171834s)

-- stdout --
	* [old-k8s-version-166000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-166000" primary control-plane node in "old-k8s-version-166000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-166000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:18.263405    6072 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:18.263569    6072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:18.263572    6072 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:18.263575    6072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:18.263699    6072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:18.264818    6072 out.go:352] Setting JSON to false
	I1001 12:41:18.281108    6072 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4243,"bootTime":1727807435,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:18.281184    6072 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:18.287274    6072 out.go:177] * [old-k8s-version-166000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:18.296052    6072 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:18.296136    6072 notify.go:220] Checking for updates...
	I1001 12:41:18.303046    6072 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:18.306067    6072 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:18.309089    6072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:18.312093    6072 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:18.315051    6072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:18.318425    6072 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:18.318495    6072 config.go:182] Loaded profile config "stopped-upgrade-340000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 12:41:18.318542    6072 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:18.322064    6072 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:18.329086    6072 start.go:297] selected driver: qemu2
	I1001 12:41:18.329093    6072 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:18.329099    6072 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:18.331104    6072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:41:18.334108    6072 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:18.337184    6072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:18.337208    6072 cni.go:84] Creating CNI manager for ""
	I1001 12:41:18.337238    6072 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 12:41:18.337268    6072 start.go:340] cluster config:
	{Name:old-k8s-version-166000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:18.340595    6072 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:18.348061    6072 out.go:177] * Starting "old-k8s-version-166000" primary control-plane node in "old-k8s-version-166000" cluster
	I1001 12:41:18.352058    6072 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 12:41:18.352071    6072 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 12:41:18.352078    6072 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:18.352130    6072 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:18.352135    6072 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 12:41:18.352184    6072 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/old-k8s-version-166000/config.json ...
	I1001 12:41:18.352193    6072 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/old-k8s-version-166000/config.json: {Name:mkc25ec4899f93a1b5231428e126f850eb148de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:18.352399    6072 start.go:360] acquireMachinesLock for old-k8s-version-166000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:18.352429    6072 start.go:364] duration metric: took 24.291µs to acquireMachinesLock for "old-k8s-version-166000"
	I1001 12:41:18.352440    6072 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:18.352473    6072 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:18.360051    6072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:18.375291    6072 start.go:159] libmachine.API.Create for "old-k8s-version-166000" (driver="qemu2")
	I1001 12:41:18.375324    6072 client.go:168] LocalClient.Create starting
	I1001 12:41:18.375384    6072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:18.375413    6072 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:18.375423    6072 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:18.375467    6072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:18.375490    6072 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:18.375499    6072 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:18.375848    6072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:18.535012    6072 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:18.579195    6072 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:18.579202    6072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:18.579424    6072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:18.588561    6072 main.go:141] libmachine: STDOUT: 
	I1001 12:41:18.588580    6072 main.go:141] libmachine: STDERR: 
	I1001 12:41:18.588634    6072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2 +20000M
	I1001 12:41:18.596971    6072 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:18.596985    6072 main.go:141] libmachine: STDERR: 
	I1001 12:41:18.597001    6072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:18.597008    6072 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:18.597021    6072 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:18.597051    6072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:11:fe:79:92:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:18.598708    6072 main.go:141] libmachine: STDOUT: 
	I1001 12:41:18.598723    6072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:18.598742    6072 client.go:171] duration metric: took 223.416375ms to LocalClient.Create
	I1001 12:41:20.600906    6072 start.go:128] duration metric: took 2.248456917s to createHost
	I1001 12:41:20.600983    6072 start.go:83] releasing machines lock for "old-k8s-version-166000", held for 2.248602958s
	W1001 12:41:20.601106    6072 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:20.614411    6072 out.go:177] * Deleting "old-k8s-version-166000" in qemu2 ...
	W1001 12:41:20.646446    6072 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:20.646493    6072 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:25.648531    6072 start.go:360] acquireMachinesLock for old-k8s-version-166000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:25.649054    6072 start.go:364] duration metric: took 441.666µs to acquireMachinesLock for "old-k8s-version-166000"
	I1001 12:41:25.649163    6072 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:25.649414    6072 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:25.658896    6072 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:25.701539    6072 start.go:159] libmachine.API.Create for "old-k8s-version-166000" (driver="qemu2")
	I1001 12:41:25.701581    6072 client.go:168] LocalClient.Create starting
	I1001 12:41:25.701689    6072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:25.701746    6072 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:25.701760    6072 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:25.701898    6072 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:25.701938    6072 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:25.701952    6072 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:25.702462    6072 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:25.869114    6072 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:25.999614    6072 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:25.999624    6072 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:25.999829    6072 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:26.009220    6072 main.go:141] libmachine: STDOUT: 
	I1001 12:41:26.009248    6072 main.go:141] libmachine: STDERR: 
	I1001 12:41:26.009299    6072 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2 +20000M
	I1001 12:41:26.017244    6072 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:26.017258    6072 main.go:141] libmachine: STDERR: 
	I1001 12:41:26.017269    6072 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:26.017274    6072 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:26.017285    6072 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:26.017314    6072 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:27:e0:aa:d9:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:26.018969    6072 main.go:141] libmachine: STDOUT: 
	I1001 12:41:26.018984    6072 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:26.018998    6072 client.go:171] duration metric: took 317.420333ms to LocalClient.Create
	I1001 12:41:28.020586    6072 start.go:128] duration metric: took 2.371205167s to createHost
	I1001 12:41:28.020655    6072 start.go:83] releasing machines lock for "old-k8s-version-166000", held for 2.371608s
	W1001 12:41:28.020843    6072 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-166000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:28.029353    6072 out.go:201] 
	W1001 12:41:28.043477    6072 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:28.043505    6072 out.go:270] * 
	W1001 12:41:28.045208    6072 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:28.055315    6072 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (58.342625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
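The post-mortem's exit status 7 is expected here: minikube status encodes component state bitwise in its exit code, and 7 sets the "not running" bits for host, cluster, and Kubernetes at once, which is why the harness notes it "may be ok". The same information is also available in machine-readable form alongside the Go template the harness uses; a sketch:

	# machine-readable variant of the --format={{.Host}} check above
	out/minikube-darwin-arm64 status -p old-k8s-version-166000 --output json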
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.89s)
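Note that everything before the network step succeeds in the stderr above: libmachine shells out to qemu-img twice, first converting the raw seed disk to qcow2 and then growing it by 20000M, and both invocations log empty STDERR. The two-step image preparation can be reproduced in isolation; a sketch with placeholder file names (the flags are exactly those logged above):

	# Convert the raw boot2docker seed image into qcow2 format.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2

	# Grow the virtual disk by 20000 MB; qcow2 allocates lazily, so this is cheap.
	qemu-img resize disk.qcow2 +20000M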

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-166000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-166000 create -f testdata/busybox.yaml: exit status 1 (29.68225ms)

** stderr ** 
	error: context "old-k8s-version-166000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-166000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (30.792208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (29.573708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
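This failure is purely downstream of FirstStart: because the cluster was never created, minikube never wrote a kubeconfig entry, so kubectl has no context named old-k8s-version-166000 to select. A quick way to confirm which contexts actually exist (plain kubectl against the kubeconfig used by this job):

	# List every context in the job's kubeconfig; the profile name would
	# appear here only after a successful start.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19736-1073/kubeconfig config get-contexts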

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-166000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-166000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-166000 describe deploy/metrics-server -n kube-system: exit status 1 (27.39325ms)

** stderr ** 
	error: context "old-k8s-version-166000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-166000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (29.673084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
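The addons enable command itself exits cleanly (only the subsequent kubectl describe fails), but the assertion at start_stop_delete_test.go:221 needs a live apiserver to read back the metrics-server deployment and find the overridden image fake.domain/registry.k8s.io/echoserver:1.4. On a healthy cluster the override could be verified directly; a sketch (the jsonpath assumes the image sits on the deployment's first container):

	kubectl --context old-k8s-version-166000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'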

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.202508416s)

-- stdout --
	* [old-k8s-version-166000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-166000" primary control-plane node in "old-k8s-version-166000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-166000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:31.940314    6129 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:31.940444    6129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:31.940450    6129 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:31.940453    6129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:31.940598    6129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:31.941585    6129 out.go:352] Setting JSON to false
	I1001 12:41:31.957658    6129 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4256,"bootTime":1727807435,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:31.957737    6129 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:31.960006    6129 out.go:177] * [old-k8s-version-166000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:31.967141    6129 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:31.967148    6129 notify.go:220] Checking for updates...
	I1001 12:41:31.974040    6129 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:31.977034    6129 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:31.979974    6129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:31.983012    6129 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:31.986038    6129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:31.987918    6129 config.go:182] Loaded profile config "old-k8s-version-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 12:41:31.991031    6129 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 12:41:31.994032    6129 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:31.997838    6129 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:41:32.005045    6129 start.go:297] selected driver: qemu2
	I1001 12:41:32.005051    6129 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:32.005118    6129 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:32.007314    6129 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:32.007341    6129 cni.go:84] Creating CNI manager for ""
	I1001 12:41:32.007362    6129 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 12:41:32.007392    6129 start.go:340] cluster config:
	{Name:old-k8s-version-166000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:32.010952    6129 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:32.018014    6129 out.go:177] * Starting "old-k8s-version-166000" primary control-plane node in "old-k8s-version-166000" cluster
	I1001 12:41:32.022067    6129 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 12:41:32.022081    6129 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 12:41:32.022090    6129 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:32.022173    6129 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:32.022182    6129 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 12:41:32.022252    6129 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/old-k8s-version-166000/config.json ...
	I1001 12:41:32.022792    6129 start.go:360] acquireMachinesLock for old-k8s-version-166000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:32.022821    6129 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "old-k8s-version-166000"
	I1001 12:41:32.022828    6129 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:32.022833    6129 fix.go:54] fixHost starting: 
	I1001 12:41:32.022956    6129 fix.go:112] recreateIfNeeded on old-k8s-version-166000: state=Stopped err=<nil>
	W1001 12:41:32.022964    6129 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:32.027040    6129 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-166000" ...
	I1001 12:41:32.035024    6129 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:32.035064    6129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:27:e0:aa:d9:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:32.036936    6129 main.go:141] libmachine: STDOUT: 
	I1001 12:41:32.037034    6129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:32.037064    6129 fix.go:56] duration metric: took 14.231916ms for fixHost
	I1001 12:41:32.037069    6129 start.go:83] releasing machines lock for "old-k8s-version-166000", held for 14.244833ms
	W1001 12:41:32.037076    6129 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:32.037117    6129 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:32.037122    6129 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:37.037212    6129 start.go:360] acquireMachinesLock for old-k8s-version-166000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:37.037684    6129 start.go:364] duration metric: took 353.917µs to acquireMachinesLock for "old-k8s-version-166000"
	I1001 12:41:37.037804    6129 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:37.037826    6129 fix.go:54] fixHost starting: 
	I1001 12:41:37.038555    6129 fix.go:112] recreateIfNeeded on old-k8s-version-166000: state=Stopped err=<nil>
	W1001 12:41:37.038585    6129 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:37.061643    6129 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-166000" ...
	I1001 12:41:37.066505    6129 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:37.066688    6129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:27:e0:aa:d9:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/old-k8s-version-166000/disk.qcow2
	I1001 12:41:37.076644    6129 main.go:141] libmachine: STDOUT: 
	I1001 12:41:37.076700    6129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:37.076810    6129 fix.go:56] duration metric: took 38.985667ms for fixHost
	I1001 12:41:37.076829    6129 start.go:83] releasing machines lock for "old-k8s-version-166000", held for 39.122917ms
	W1001 12:41:37.077027    6129 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-166000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:37.086300    6129 out.go:201] 
	W1001 12:41:37.090463    6129 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:37.090486    6129 out.go:270] * 
	W1001 12:41:37.092490    6129 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:37.102483    6129 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-166000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (62.08175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
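Note: every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. A minimal stdlib-only Go sketch (a hypothetical diagnostic, not part of the test suite) that probes the same unix socket the driver dials:

    // probe_socket_vmnet.go - hypothetical diagnostic sketch; assumes the
    // default socket path shown in the driver logs above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        if _, err := os.Stat(sock); err != nil {
            fmt.Fprintf(os.Stderr, "socket file missing: %v\n", err) // daemon never created it
            os.Exit(1)
        }
        // A socket file with no listener behind it yields exactly the
        // "Connection refused" seen in the logs above.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "dial failed (is the socket_vmnet daemon running?): %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails the same way, the problem is on the CI host (the socket_vmnet daemon is down), not in minikube or the test itself.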

TestStartStop/group/no-preload/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.973769708s)

-- stdout --
	* [no-preload-877000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-877000" primary control-plane node in "no-preload-877000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-877000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:33.001021    6141 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:33.001236    6141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:33.001239    6141 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:33.001241    6141 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:33.001369    6141 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:33.002440    6141 out.go:352] Setting JSON to false
	I1001 12:41:33.018608    6141 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4258,"bootTime":1727807435,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:33.018695    6141 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:33.021786    6141 out.go:177] * [no-preload-877000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:33.029762    6141 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:33.029824    6141 notify.go:220] Checking for updates...
	I1001 12:41:33.037428    6141 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:33.040780    6141 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:33.043782    6141 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:33.046854    6141 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:33.049779    6141 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:33.053093    6141 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:33.053179    6141 config.go:182] Loaded profile config "old-k8s-version-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 12:41:33.053224    6141 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:33.057760    6141 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:33.064785    6141 start.go:297] selected driver: qemu2
	I1001 12:41:33.064790    6141 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:33.064796    6141 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:33.066898    6141 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:41:33.069786    6141 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:33.072829    6141 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:33.072846    6141 cni.go:84] Creating CNI manager for ""
	I1001 12:41:33.072865    6141 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:33.072875    6141 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:41:33.072910    6141 start.go:340] cluster config:
	{Name:no-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:33.076628    6141 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.083741    6141 out.go:177] * Starting "no-preload-877000" primary control-plane node in "no-preload-877000" cluster
	I1001 12:41:33.087761    6141 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:33.087831    6141 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/no-preload-877000/config.json ...
	I1001 12:41:33.087857    6141 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/no-preload-877000/config.json: {Name:mk6549d2fadaf15f6b9bb98541fe8fb3a41b2ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:33.087852    6141 cache.go:107] acquiring lock: {Name:mk6c1930d14b46ca06bda2cab6fa5b0fecacbe45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.087870    6141 cache.go:107] acquiring lock: {Name:mk76d730e38b03730872851b5b5bb1860e206f80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.087886    6141 cache.go:107] acquiring lock: {Name:mkf5130e27dbfbf3d0acb12c8c0ee294eda3f4e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.087940    6141 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 12:41:33.087946    6141 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.5µs
	I1001 12:41:33.087953    6141 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 12:41:33.087960    6141 cache.go:107] acquiring lock: {Name:mkb8a01bc5b9a4e81d2dfe0ecd83d552e4e7c4b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.088020    6141 cache.go:107] acquiring lock: {Name:mk6e5fc2ce366c949dff20c04fc9979b8e8f6ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.088047    6141 cache.go:107] acquiring lock: {Name:mkc2d80ef95a069a90e06c74b16f1e8f482d8608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.088060    6141 cache.go:107] acquiring lock: {Name:mkedb11964e06526288d3a287309c73a9ae6977d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.088091    6141 cache.go:107] acquiring lock: {Name:mkcb3e072589269f9980cf0601c7772d2a5ddc63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:33.088261    6141 start.go:360] acquireMachinesLock for no-preload-877000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:33.088280    6141 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1001 12:41:33.088291    6141 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1001 12:41:33.088325    6141 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1001 12:41:33.088391    6141 start.go:364] duration metric: took 118.792µs to acquireMachinesLock for "no-preload-877000"
	I1001 12:41:33.088432    6141 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1001 12:41:33.088387    6141 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1001 12:41:33.088466    6141 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1001 12:41:33.088500    6141 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1001 12:41:33.088463    6141 start.go:93] Provisioning new machine with config: &{Name:no-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:33.088563    6141 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:33.091751    6141 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:33.094780    6141 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1001 12:41:33.094949    6141 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1001 12:41:33.095143    6141 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1001 12:41:33.095375    6141 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1001 12:41:33.095423    6141 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1001 12:41:33.095450    6141 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1001 12:41:33.096828    6141 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1001 12:41:33.109937    6141 start.go:159] libmachine.API.Create for "no-preload-877000" (driver="qemu2")
	I1001 12:41:33.109974    6141 client.go:168] LocalClient.Create starting
	I1001 12:41:33.110067    6141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:33.110099    6141 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:33.110109    6141 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:33.110157    6141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:33.110181    6141 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:33.110188    6141 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:33.110629    6141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:33.273510    6141 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:33.477334    6141 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:33.477351    6141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:33.477551    6141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:33.486728    6141 main.go:141] libmachine: STDOUT: 
	I1001 12:41:33.486745    6141 main.go:141] libmachine: STDERR: 
	I1001 12:41:33.486803    6141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2 +20000M
	I1001 12:41:33.494693    6141 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:33.494711    6141 main.go:141] libmachine: STDERR: 
	I1001 12:41:33.494725    6141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:33.494731    6141 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:33.494742    6141 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:33.494765    6141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:89:f6:ec:3f:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:33.496412    6141 main.go:141] libmachine: STDOUT: 
	I1001 12:41:33.496427    6141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:33.496448    6141 client.go:171] duration metric: took 386.478167ms to LocalClient.Create
	I1001 12:41:35.040572    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1001 12:41:35.167205    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1001 12:41:35.185836    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1001 12:41:35.200532    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1001 12:41:35.225092    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1001 12:41:35.225156    6141 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 2.137244584s
	I1001 12:41:35.225189    6141 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1001 12:41:35.496705    6141 start.go:128] duration metric: took 2.40817925s to createHost
	I1001 12:41:35.496763    6141 start.go:83] releasing machines lock for "no-preload-877000", held for 2.408414542s
	W1001 12:41:35.496829    6141 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:35.515713    6141 out.go:177] * Deleting "no-preload-877000" in qemu2 ...
	W1001 12:41:35.549013    6141 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:35.549034    6141 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:35.695667    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1001 12:41:35.715610    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1001 12:41:35.731060    6141 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1001 12:41:38.240411    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1001 12:41:38.240430    6141 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 5.152543583s
	I1001 12:41:38.240444    6141 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1001 12:41:39.425474    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1001 12:41:39.425523    6141 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 6.337642833s
	I1001 12:41:39.425548    6141 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1001 12:41:40.079971    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1001 12:41:40.080018    6141 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 6.992129667s
	I1001 12:41:40.080044    6141 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1001 12:41:40.549073    6141 start.go:360] acquireMachinesLock for no-preload-877000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:40.549464    6141 start.go:364] duration metric: took 322.5µs to acquireMachinesLock for "no-preload-877000"
	I1001 12:41:40.549618    6141 start.go:93] Provisioning new machine with config: &{Name:no-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:40.549845    6141 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:40.559463    6141 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:40.612028    6141 start.go:159] libmachine.API.Create for "no-preload-877000" (driver="qemu2")
	I1001 12:41:40.612124    6141 client.go:168] LocalClient.Create starting
	I1001 12:41:40.612262    6141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:40.612325    6141 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:40.612346    6141 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:40.612412    6141 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:40.612456    6141 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:40.612469    6141 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:40.613007    6141 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:40.782626    6141 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:40.863258    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1001 12:41:40.863277    6141 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 7.775616208s
	I1001 12:41:40.863285    6141 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1001 12:41:40.876322    6141 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:40.876328    6141 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:40.876505    6141 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:40.885750    6141 main.go:141] libmachine: STDOUT: 
	I1001 12:41:40.885766    6141 main.go:141] libmachine: STDERR: 
	I1001 12:41:40.885821    6141 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2 +20000M
	I1001 12:41:40.893780    6141 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:40.893792    6141 main.go:141] libmachine: STDERR: 
	I1001 12:41:40.893806    6141 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:40.893833    6141 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:40.893842    6141 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:40.893880    6141 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:81:c9:7a:8c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:40.895624    6141 main.go:141] libmachine: STDOUT: 
	I1001 12:41:40.895638    6141 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:40.895647    6141 client.go:171] duration metric: took 283.524875ms to LocalClient.Create
	I1001 12:41:41.037580    6141 cache.go:157] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1001 12:41:41.037606    6141 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 7.949952875s
	I1001 12:41:41.037624    6141 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1001 12:41:42.896939    6141 start.go:128] duration metric: took 2.347092166s to createHost
	I1001 12:41:42.896994    6141 start.go:83] releasing machines lock for "no-preload-877000", held for 2.347567333s
	W1001 12:41:42.897555    6141 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:42.906801    6141 out.go:201] 
	W1001 12:41:42.916997    6141 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:42.917031    6141 out.go:270] * 
	W1001 12:41:42.919562    6141 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:42.928756    6141 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (63.702625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)
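Note: the transcript above also shows the driver's recovery path: the first create fails, the half-created profile is deleted, and exactly one retry runs after a fixed five-second delay before the error is surfaced as GUEST_PROVISION. A schematic Go sketch of that control flow (startHost is a hypothetical stand-in, not minikube's actual function):

    // Sketch of the retry flow visible above: one fixed 5s back-off, then
    // the error is surfaced. startHost stands in for the create/start path.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        // stands in for libmachine create + QEMU start; here it always
        // fails, like both attempts in the log above
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            err = startHost()
        }
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }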

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-166000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (32.572833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
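Note: this failure and the next one happen before any pod polling starts, because kubectl has no context for a cluster that never came back up. A hedged Go sketch of the guard the harness effectively hits, built on the real `kubectl config get-contexts -o name` listing (contextExists is a hypothetical helper):

    // Check whether a kubeconfig context exists before waiting on pods.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        // prints one context name per line
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("old-k8s-version-166000")
        // false here: the failed SecondStart never wrote the context back
        fmt.Println(ok, err)
    }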

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-166000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-166000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-166000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.91975ms)

** stderr ** 
	error: context "old-k8s-version-166000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-166000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (28.9245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-166000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (28.96975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
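Note: the `-want +got` diff above is a plain set comparison: every expected v1.20.0 image is reported missing because `image list` on a host that never booted returns nothing. A minimal Go sketch of that comparison (the want slice is abbreviated; the full list is in the diff above):

    // missing reports every expected image absent from got, mirroring the
    // want/got diff printed by the test.
    package main

    import "fmt"

    func missing(want, got []string) []string {
        have := make(map[string]bool, len(got))
        for _, img := range got {
            have[img] = true
        }
        var out []string
        for _, img := range want {
            if !have[img] {
                out = append(out, img)
            }
        }
        return out
    }

    func main() {
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/pause:3.2",
            // ... the rest of the v1.20.0 list from the diff above
        }
        got := []string{} // empty: the VM never booted, so nothing was listed
        fmt.Println(missing(want, got)) // all of want
    }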

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-166000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-166000 --alsologtostderr -v=1: exit status 83 (43.509375ms)

-- stdout --
	* The control-plane node old-k8s-version-166000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-166000"

-- /stdout --
** stderr ** 
	I1001 12:41:37.363429    6191 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:37.363793    6191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:37.363797    6191 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:37.363800    6191 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:37.364259    6191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:37.364518    6191 out.go:352] Setting JSON to false
	I1001 12:41:37.364536    6191 mustload.go:65] Loading cluster: old-k8s-version-166000
	I1001 12:41:37.364942    6191 config.go:182] Loaded profile config "old-k8s-version-166000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 12:41:37.369145    6191 out.go:177] * The control-plane node old-k8s-version-166000 host is not running: state=Stopped
	I1001 12:41:37.374760    6191 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-166000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-166000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (29.185792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (29.648ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-166000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
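The pause fails with exit status 83 because the control-plane host is Stopped, as both post-mortem probes confirm. A minimal guard sketch; the profile name is taken from the run above, and gating pause on host state is an illustration rather than anything the harness does:

	# Only attempt pause when the host reports Running; otherwise suggest a start.
	PROFILE=old-k8s-version-166000
	STATE=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE")
	if [ "$STATE" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p "$PROFILE"
	else
	  echo "host is '$STATE'; run: out/minikube-darwin-arm64 start -p $PROFILE" >&2
	fi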

TestStartStop/group/embed-certs/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.894059s)

-- stdout --
	* [embed-certs-044000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-044000" primary control-plane node in "embed-certs-044000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-044000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:37.687384    6209 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:37.687499    6209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:37.687502    6209 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:37.687504    6209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:37.687649    6209 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:37.688799    6209 out.go:352] Setting JSON to false
	I1001 12:41:37.705079    6209 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4262,"bootTime":1727807435,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:37.705142    6209 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:37.709998    6209 out.go:177] * [embed-certs-044000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:37.718263    6209 notify.go:220] Checking for updates...
	I1001 12:41:37.723164    6209 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:37.730119    6209 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:37.734107    6209 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:37.741035    6209 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:37.749168    6209 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:37.753134    6209 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:37.756464    6209 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:37.756540    6209 config.go:182] Loaded profile config "no-preload-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:37.756586    6209 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:37.759118    6209 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:37.766050    6209 start.go:297] selected driver: qemu2
	I1001 12:41:37.766056    6209 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:37.766062    6209 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:37.768184    6209 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:41:37.772175    6209 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:37.775211    6209 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:37.775234    6209 cni.go:84] Creating CNI manager for ""
	I1001 12:41:37.775258    6209 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:37.775264    6209 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:41:37.775306    6209 start.go:340] cluster config:
	{Name:embed-certs-044000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:37.779163    6209 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:37.788119    6209 out.go:177] * Starting "embed-certs-044000" primary control-plane node in "embed-certs-044000" cluster
	I1001 12:41:37.791169    6209 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:37.791203    6209 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:41:37.791223    6209 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:37.791293    6209 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:37.791300    6209 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:41:37.791375    6209 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/embed-certs-044000/config.json ...
	I1001 12:41:37.791387    6209 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/embed-certs-044000/config.json: {Name:mk9c1cbbf4201c72171b689a7f4489d5d6237f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:37.791691    6209 start.go:360] acquireMachinesLock for embed-certs-044000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:37.791724    6209 start.go:364] duration metric: took 27.375µs to acquireMachinesLock for "embed-certs-044000"
	I1001 12:41:37.791736    6209 start.go:93] Provisioning new machine with config: &{Name:embed-certs-044000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:37.791764    6209 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:37.796144    6209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:37.813186    6209 start.go:159] libmachine.API.Create for "embed-certs-044000" (driver="qemu2")
	I1001 12:41:37.813219    6209 client.go:168] LocalClient.Create starting
	I1001 12:41:37.813281    6209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:37.813314    6209 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:37.813323    6209 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:37.813376    6209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:37.813399    6209 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:37.813409    6209 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:37.813764    6209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:38.024460    6209 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:38.103184    6209 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:38.103191    6209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:38.103351    6209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:38.113123    6209 main.go:141] libmachine: STDOUT: 
	I1001 12:41:38.113149    6209 main.go:141] libmachine: STDERR: 
	I1001 12:41:38.113213    6209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2 +20000M
	I1001 12:41:38.121082    6209 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:38.121101    6209 main.go:141] libmachine: STDERR: 
	I1001 12:41:38.121112    6209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:38.121118    6209 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:38.121130    6209 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:38.121161    6209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:4b:54:e2:1b:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:38.122842    6209 main.go:141] libmachine: STDOUT: 
	I1001 12:41:38.122865    6209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:38.122887    6209 client.go:171] duration metric: took 309.669875ms to LocalClient.Create
	I1001 12:41:40.125037    6209 start.go:128] duration metric: took 2.333301833s to createHost
	I1001 12:41:40.125080    6209 start.go:83] releasing machines lock for "embed-certs-044000", held for 2.333406208s
	W1001 12:41:40.125202    6209 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:40.136411    6209 out.go:177] * Deleting "embed-certs-044000" in qemu2 ...
	W1001 12:41:40.171198    6209 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:40.171224    6209 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:45.173395    6209 start.go:360] acquireMachinesLock for embed-certs-044000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:45.173747    6209 start.go:364] duration metric: took 273.458µs to acquireMachinesLock for "embed-certs-044000"
	I1001 12:41:45.173808    6209 start.go:93] Provisioning new machine with config: &{Name:embed-certs-044000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:45.174089    6209 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:45.179757    6209 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:45.229602    6209 start.go:159] libmachine.API.Create for "embed-certs-044000" (driver="qemu2")
	I1001 12:41:45.229659    6209 client.go:168] LocalClient.Create starting
	I1001 12:41:45.229760    6209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:45.229806    6209 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:45.229824    6209 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:45.229897    6209 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:45.229938    6209 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:45.229953    6209 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:45.230497    6209 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:45.409776    6209 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:45.478347    6209 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:45.478353    6209 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:45.478558    6209 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:45.487655    6209 main.go:141] libmachine: STDOUT: 
	I1001 12:41:45.487677    6209 main.go:141] libmachine: STDERR: 
	I1001 12:41:45.487730    6209 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2 +20000M
	I1001 12:41:45.495600    6209 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:45.495621    6209 main.go:141] libmachine: STDERR: 
	I1001 12:41:45.495632    6209 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:45.495637    6209 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:45.495646    6209 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:45.495699    6209 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:92:8c:a4:cb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:45.497308    6209 main.go:141] libmachine: STDOUT: 
	I1001 12:41:45.497323    6209 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:45.497342    6209 client.go:171] duration metric: took 267.678833ms to LocalClient.Create
	I1001 12:41:47.499488    6209 start.go:128] duration metric: took 2.325427291s to createHost
	I1001 12:41:47.499565    6209 start.go:83] releasing machines lock for "embed-certs-044000", held for 2.325855625s
	W1001 12:41:47.499905    6209 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:47.510697    6209 out.go:201] 
	W1001 12:41:47.523664    6209 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:47.523697    6209 out.go:270] * 
	* 
	W1001 12:41:47.526609    6209 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:47.538546    6209 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (62.756041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.96s)
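Both create attempts die at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the QEMU process is never started. A host-side triage sketch, assuming socket_vmnet was installed the way minikube's qemu driver docs describe (a Homebrew-managed service; the service name and paths are assumptions about this host):

	# Is the control socket present, and is the service actually loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# Restart it the way the docs start it.
	HOMEBREW=$(which brew) && sudo "$HOMEBREW" services restart socket_vmnet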

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-877000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-877000 create -f testdata/busybox.yaml: exit status 1 (29.445709ms)

** stderr ** 
	error: context "no-preload-877000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-877000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (29.246417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (28.66125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
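The deploy step fails before touching any cluster: the earlier FirstStart never completed, so no "no-preload-877000" entry was ever written to the kubeconfig. A quick existence check, using the context name from the run above:

	# Confirm whether the context the harness expects actually exists.
	kubectl config get-contexts -o name | grep -qx no-preload-877000 \
	  || echo "context missing: the failed start left no kubeconfig entry" >&2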

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-877000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-877000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-877000 describe deploy/metrics-server -n kube-system: exit status 1 (26.218459ms)

** stderr ** 
	error: context "no-preload-877000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-877000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (28.570792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
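The `addons enable` call itself succeeds because it only rewrites the profile's config, but the follow-up describe needs a live apiserver to show that the --images/--registries overrides landed in the deployment. A sketch of that check against a reachable cluster; the jsonpath expression is an illustration of what the test greps the describe output for:

	# Print the image the metrics-server deployment would pull; the test expects it
	# to contain fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context no-preload-877000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'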

TestStartStop/group/no-preload/serial/SecondStart (5.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.869650167s)

-- stdout --
	* [no-preload-877000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-877000" primary control-plane node in "no-preload-877000" cluster
	* Restarting existing qemu2 VM for "no-preload-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:46.759330    6267 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:46.759480    6267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:46.759483    6267 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:46.759486    6267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:46.759614    6267 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:46.760591    6267 out.go:352] Setting JSON to false
	I1001 12:41:46.776783    6267 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4271,"bootTime":1727807435,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:46.776858    6267 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:46.780793    6267 out.go:177] * [no-preload-877000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:46.787628    6267 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:46.787656    6267 notify.go:220] Checking for updates...
	I1001 12:41:46.795642    6267 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:46.798770    6267 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:46.801720    6267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:46.804727    6267 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:46.807680    6267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:46.811027    6267 config.go:182] Loaded profile config "no-preload-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:46.811307    6267 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:46.815742    6267 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:41:46.822719    6267 start.go:297] selected driver: qemu2
	I1001 12:41:46.822726    6267 start.go:901] validating driver "qemu2" against &{Name:no-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:no-preload-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:46.822791    6267 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:46.825073    6267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:46.825099    6267 cni.go:84] Creating CNI manager for ""
	I1001 12:41:46.825119    6267 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:46.825150    6267 start.go:340] cluster config:
	{Name:no-preload-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-877000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:46.828751    6267 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.836689    6267 out.go:177] * Starting "no-preload-877000" primary control-plane node in "no-preload-877000" cluster
	I1001 12:41:46.840662    6267 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:46.840750    6267 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/no-preload-877000/config.json ...
	I1001 12:41:46.840798    6267 cache.go:107] acquiring lock: {Name:mk6c1930d14b46ca06bda2cab6fa5b0fecacbe45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840799    6267 cache.go:107] acquiring lock: {Name:mkcb3e072589269f9980cf0601c7772d2a5ddc63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840884    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 12:41:46.840889    6267 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.458µs
	I1001 12:41:46.840898    6267 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 12:41:46.840904    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1001 12:41:46.840905    6267 cache.go:107] acquiring lock: {Name:mkc2d80ef95a069a90e06c74b16f1e8f482d8608 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840912    6267 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 127.375µs
	I1001 12:41:46.840917    6267 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1001 12:41:46.840925    6267 cache.go:107] acquiring lock: {Name:mkedb11964e06526288d3a287309c73a9ae6977d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840934    6267 cache.go:107] acquiring lock: {Name:mk76d730e38b03730872851b5b5bb1860e206f80 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840944    6267 cache.go:107] acquiring lock: {Name:mk6e5fc2ce366c949dff20c04fc9979b8e8f6ebb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.840968    6267 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1001 12:41:46.840954    6267 cache.go:107] acquiring lock: {Name:mkf5130e27dbfbf3d0acb12c8c0ee294eda3f4e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.841038    6267 cache.go:107] acquiring lock: {Name:mkb8a01bc5b9a4e81d2dfe0ecd83d552e4e7c4b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:46.841053    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1001 12:41:46.841061    6267 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 118.25µs
	I1001 12:41:46.841066    6267 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1001 12:41:46.841073    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1001 12:41:46.841077    6267 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 166.792µs
	I1001 12:41:46.841080    6267 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1001 12:41:46.841089    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1001 12:41:46.841096    6267 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 299.5µs
	I1001 12:41:46.841099    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1001 12:41:46.841101    6267 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1001 12:41:46.841104    6267 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 266.959µs
	I1001 12:41:46.841112    6267 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1001 12:41:46.841147    6267 cache.go:115] /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1001 12:41:46.841165    6267 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 240.416µs
	I1001 12:41:46.841174    6267 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1001 12:41:46.841254    6267 start.go:360] acquireMachinesLock for no-preload-877000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:46.844421    6267 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1001 12:41:47.499834    6267 start.go:364] duration metric: took 658.560208ms to acquireMachinesLock for "no-preload-877000"
	I1001 12:41:47.500065    6267 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:47.500085    6267 fix.go:54] fixHost starting: 
	I1001 12:41:47.500831    6267 fix.go:112] recreateIfNeeded on no-preload-877000: state=Stopped err=<nil>
	W1001 12:41:47.500864    6267 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:47.519684    6267 out.go:177] * Restarting existing qemu2 VM for "no-preload-877000" ...
	I1001 12:41:47.526660    6267 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:47.526858    6267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:81:c9:7a:8c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:47.535870    6267 main.go:141] libmachine: STDOUT: 
	I1001 12:41:47.535943    6267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:47.536063    6267 fix.go:56] duration metric: took 35.970292ms for fixHost
	I1001 12:41:47.536088    6267 start.go:83] releasing machines lock for "no-preload-877000", held for 36.167292ms
	W1001 12:41:47.536118    6267 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:47.536286    6267 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:47.536303    6267 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:48.739651    6267 cache.go:162] opening:  /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1001 12:41:52.536586    6267 start.go:360] acquireMachinesLock for no-preload-877000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:52.536973    6267 start.go:364] duration metric: took 303.458µs to acquireMachinesLock for "no-preload-877000"
	I1001 12:41:52.537104    6267 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:52.537130    6267 fix.go:54] fixHost starting: 
	I1001 12:41:52.537846    6267 fix.go:112] recreateIfNeeded on no-preload-877000: state=Stopped err=<nil>
	W1001 12:41:52.537878    6267 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:52.547477    6267 out.go:177] * Restarting existing qemu2 VM for "no-preload-877000" ...
	I1001 12:41:52.552361    6267 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:52.552540    6267 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:81:c9:7a:8c:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/no-preload-877000/disk.qcow2
	I1001 12:41:52.562508    6267 main.go:141] libmachine: STDOUT: 
	I1001 12:41:52.562600    6267 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:52.562678    6267 fix.go:56] duration metric: took 25.552583ms for fixHost
	I1001 12:41:52.562703    6267 start.go:83] releasing machines lock for "no-preload-877000", held for 25.705417ms
	W1001 12:41:52.562903    6267 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:52.570432    6267 out.go:201] 
	W1001 12:41:52.574503    6267 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:52.574548    6267 out.go:270] * 
	* 
	W1001 12:41:52.577321    6267 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:52.586537    6267 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-877000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (64.673458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.94s)
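Every failed start in this group dies at the same step: the qemu2 driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client and the dial to /var/run/socket_vmnet is refused, so the VM never boots. Below is a minimal Go sketch of that connectivity probe (a hypothetical diagnostic, not part of minikube or this test suite; the socket path is taken from the log above):

// socketprobe.go: dial the socket_vmnet control socket the qemu2 driver uses.
// A "connection refused" error here matches the failure mode in this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

If this dial fails the way it does throughout the run, the socket_vmnet daemon on the build host is most likely not running (or not listening at that path), which would explain why every qemu2-backed test below fails identically.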

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-044000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-044000 create -f testdata/busybox.yaml: exit status 1 (30.43825ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-044000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-044000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (29.533958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (30.091417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
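The kubectl failures in this group are all secondary: the earlier start exited before provisioning, so no "embed-certs-044000" entry was ever written to the kubeconfig, and every --context invocation fails with "context does not exist". A short Go sketch of that precondition check, shelling out to kubectl (assumes kubectl is on PATH; hypothetical helper, not part of the test harness):

// ctxcheck.go: report whether a kubeconfig context exists before using it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	// `kubectl config get-contexts -o name` prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("embed-certs-044000") // profile name from this report
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false on this host, matching the error above
}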

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-044000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-044000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-044000 describe deploy/metrics-server -n kube-system: exit status 1 (26.917625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-044000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-044000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (28.516833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.209830875s)

                                                
                                                
-- stdout --
	* [embed-certs-044000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-044000" primary control-plane node in "embed-certs-044000" cluster
	* Restarting existing qemu2 VM for "embed-certs-044000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-044000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:41:50.967239    6314 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:50.967385    6314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:50.967389    6314 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:50.967391    6314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:50.967514    6314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:50.968547    6314 out.go:352] Setting JSON to false
	I1001 12:41:50.984581    6314 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4275,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:50.984656    6314 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:50.988739    6314 out.go:177] * [embed-certs-044000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:50.992585    6314 notify.go:220] Checking for updates...
	I1001 12:41:50.996544    6314 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:51.005572    6314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:51.012574    6314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:51.016531    6314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:51.019592    6314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:51.026568    6314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:51.030794    6314 config.go:182] Loaded profile config "embed-certs-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:51.031076    6314 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:51.032621    6314 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:41:51.039571    6314 start.go:297] selected driver: qemu2
	I1001 12:41:51.039576    6314 start.go:901] validating driver "qemu2" against &{Name:embed-certs-044000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:51.039632    6314 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:51.041798    6314 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:51.041824    6314 cni.go:84] Creating CNI manager for ""
	I1001 12:41:51.041846    6314 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:51.041871    6314 start.go:340] cluster config:
	{Name:embed-certs-044000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:51.045335    6314 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:51.053575    6314 out.go:177] * Starting "embed-certs-044000" primary control-plane node in "embed-certs-044000" cluster
	I1001 12:41:51.056703    6314 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:51.056725    6314 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:41:51.056732    6314 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:51.056794    6314 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:51.056800    6314 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:41:51.056856    6314 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/embed-certs-044000/config.json ...
	I1001 12:41:51.057352    6314 start.go:360] acquireMachinesLock for embed-certs-044000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:51.057379    6314 start.go:364] duration metric: took 20.834µs to acquireMachinesLock for "embed-certs-044000"
	I1001 12:41:51.057386    6314 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:51.057391    6314 fix.go:54] fixHost starting: 
	I1001 12:41:51.057507    6314 fix.go:112] recreateIfNeeded on embed-certs-044000: state=Stopped err=<nil>
	W1001 12:41:51.057516    6314 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:51.065613    6314 out.go:177] * Restarting existing qemu2 VM for "embed-certs-044000" ...
	I1001 12:41:51.068646    6314 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:51.068679    6314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:92:8c:a4:cb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:51.070417    6314 main.go:141] libmachine: STDOUT: 
	I1001 12:41:51.070438    6314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:51.070466    6314 fix.go:56] duration metric: took 13.0745ms for fixHost
	I1001 12:41:51.070471    6314 start.go:83] releasing machines lock for "embed-certs-044000", held for 13.088375ms
	W1001 12:41:51.070477    6314 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:51.070521    6314 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:51.070526    6314 start.go:729] Will try again in 5 seconds ...
	I1001 12:41:56.072563    6314 start.go:360] acquireMachinesLock for embed-certs-044000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:56.072956    6314 start.go:364] duration metric: took 309.917µs to acquireMachinesLock for "embed-certs-044000"
	I1001 12:41:56.073052    6314 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:41:56.073075    6314 fix.go:54] fixHost starting: 
	I1001 12:41:56.073867    6314 fix.go:112] recreateIfNeeded on embed-certs-044000: state=Stopped err=<nil>
	W1001 12:41:56.073896    6314 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:41:56.095439    6314 out.go:177] * Restarting existing qemu2 VM for "embed-certs-044000" ...
	I1001 12:41:56.100218    6314 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:56.100432    6314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:92:8c:a4:cb:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/embed-certs-044000/disk.qcow2
	I1001 12:41:56.109919    6314 main.go:141] libmachine: STDOUT: 
	I1001 12:41:56.109990    6314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:56.110133    6314 fix.go:56] duration metric: took 37.059208ms for fixHost
	I1001 12:41:56.110156    6314 start.go:83] releasing machines lock for "embed-certs-044000", held for 37.178ms
	W1001 12:41:56.110350    6314 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-044000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-044000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:56.118228    6314 out.go:201] 
	W1001 12:41:56.121312    6314 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:41:56.121334    6314 out.go:270] * 
	* 
	W1001 12:41:56.123670    6314 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:41:56.137269    6314 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-044000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (68.396375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-877000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (32.017791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-877000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.789166ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-877000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (29.20325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-877000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (29.295792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
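The VerifyKubernetesImages assertion compares the expected v1.31.1 image set against whatever `minikube image list` returns and prints a go-cmp style (-want +got) diff; since the VM never started, the got side is empty and all eight images report as missing. A minimal reproduction of that comparison (a sketch assuming the github.com/google/go-cmp module; the want list is copied from the diff above, and the real test fills got from `minikube image list --format=json`):

// imagediff.go: emit the same -want +got diff shape shown in this report.
package main

import (
	"fmt"
	"sort"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the host is Stopped, so no images could be listed
	sort.Strings(want)
	sort.Strings(got)
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}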

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-877000 --alsologtostderr -v=1: exit status 83 (39.253125ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-877000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:41:52.852959    6335 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:52.853108    6335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:52.853111    6335 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:52.853113    6335 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:52.853231    6335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:52.853463    6335 out.go:352] Setting JSON to false
	I1001 12:41:52.853474    6335 mustload.go:65] Loading cluster: no-preload-877000
	I1001 12:41:52.853695    6335 config.go:182] Loaded profile config "no-preload-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:52.858280    6335 out.go:177] * The control-plane node no-preload-877000 host is not running: state=Stopped
	I1001 12:41:52.861340    6335 out.go:177]   To start a cluster, run: "minikube start -p no-preload-877000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-877000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (29.249958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (29.164125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.911016167s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-402000" primary control-plane node in "default-k8s-diff-port-402000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 12:41:53.272546    6359 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:53.272656    6359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:53.272660    6359 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:53.272662    6359 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:53.272777    6359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:53.273877    6359 out.go:352] Setting JSON to false
	I1001 12:41:53.289703    6359 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4278,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:53.289766    6359 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:53.294336    6359 out.go:177] * [default-k8s-diff-port-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:53.301508    6359 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:53.301577    6359 notify.go:220] Checking for updates...
	I1001 12:41:53.309304    6359 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:53.312304    6359 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:53.315334    6359 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:53.318361    6359 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:53.321352    6359 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:53.324682    6359 config.go:182] Loaded profile config "embed-certs-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:53.324740    6359 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:53.324785    6359 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:53.329302    6359 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:53.336350    6359 start.go:297] selected driver: qemu2
	I1001 12:41:53.336356    6359 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:53.336362    6359 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:53.338487    6359 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 12:41:53.341331    6359 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:53.342965    6359 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:41:53.342982    6359 cni.go:84] Creating CNI manager for ""
	I1001 12:41:53.343004    6359 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:53.343010    6359 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:41:53.343038    6359 start.go:340] cluster config:
	{Name:default-k8s-diff-port-402000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:53.346583    6359 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:53.354320    6359 out.go:177] * Starting "default-k8s-diff-port-402000" primary control-plane node in "default-k8s-diff-port-402000" cluster
	I1001 12:41:53.358281    6359 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:53.358298    6359 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:41:53.358308    6359 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:53.358381    6359 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:53.358387    6359 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:41:53.358458    6359 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/default-k8s-diff-port-402000/config.json ...
	I1001 12:41:53.358476    6359 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/default-k8s-diff-port-402000/config.json: {Name:mk77a33034dfc0c12620c82e91399a3f2bff9221 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:53.358740    6359 start.go:360] acquireMachinesLock for default-k8s-diff-port-402000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:53.358780    6359 start.go:364] duration metric: took 31.125µs to acquireMachinesLock for "default-k8s-diff-port-402000"
	I1001 12:41:53.358796    6359 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:53.358837    6359 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:53.363336    6359 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:53.382158    6359 start.go:159] libmachine.API.Create for "default-k8s-diff-port-402000" (driver="qemu2")
	I1001 12:41:53.382197    6359 client.go:168] LocalClient.Create starting
	I1001 12:41:53.382265    6359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:53.382299    6359 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:53.382310    6359 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:53.382349    6359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:53.382373    6359 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:53.382380    6359 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:53.382750    6359 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:53.547793    6359 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:53.589389    6359 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:53.589394    6359 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:53.589577    6359 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:41:53.599056    6359 main.go:141] libmachine: STDOUT: 
	I1001 12:41:53.599072    6359 main.go:141] libmachine: STDERR: 
	I1001 12:41:53.599132    6359 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2 +20000M
	I1001 12:41:53.607180    6359 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:53.607194    6359 main.go:141] libmachine: STDERR: 
	I1001 12:41:53.607214    6359 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:41:53.607222    6359 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:53.607235    6359 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:53.607263    6359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:ab:33:e7:2e:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:41:53.608871    6359 main.go:141] libmachine: STDOUT: 
	I1001 12:41:53.608885    6359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:53.608904    6359 client.go:171] duration metric: took 226.707916ms to LocalClient.Create
	I1001 12:41:55.611112    6359 start.go:128] duration metric: took 2.25231075s to createHost
	I1001 12:41:55.611232    6359 start.go:83] releasing machines lock for "default-k8s-diff-port-402000", held for 2.252461542s
	W1001 12:41:55.611291    6359 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:55.625441    6359 out.go:177] * Deleting "default-k8s-diff-port-402000" in qemu2 ...
	W1001 12:41:55.662940    6359 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:55.662996    6359 start.go:729] Will try again in 5 seconds ...
	I1001 12:42:00.665116    6359 start.go:360] acquireMachinesLock for default-k8s-diff-port-402000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:00.665554    6359 start.go:364] duration metric: took 343.75µs to acquireMachinesLock for "default-k8s-diff-port-402000"
	I1001 12:42:00.665714    6359 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:42:00.666022    6359 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:42:00.675619    6359 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:42:00.724663    6359 start.go:159] libmachine.API.Create for "default-k8s-diff-port-402000" (driver="qemu2")
	I1001 12:42:00.724719    6359 client.go:168] LocalClient.Create starting
	I1001 12:42:00.724842    6359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:42:00.724904    6359 main.go:141] libmachine: Decoding PEM data...
	I1001 12:42:00.724923    6359 main.go:141] libmachine: Parsing certificate...
	I1001 12:42:00.725007    6359 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:42:00.725053    6359 main.go:141] libmachine: Decoding PEM data...
	I1001 12:42:00.725067    6359 main.go:141] libmachine: Parsing certificate...
	I1001 12:42:00.726119    6359 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:42:00.912491    6359 main.go:141] libmachine: Creating SSH key...
	I1001 12:42:01.087576    6359 main.go:141] libmachine: Creating Disk image...
	I1001 12:42:01.087582    6359 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:42:01.087787    6359 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:42:01.097110    6359 main.go:141] libmachine: STDOUT: 
	I1001 12:42:01.097130    6359 main.go:141] libmachine: STDERR: 
	I1001 12:42:01.097201    6359 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2 +20000M
	I1001 12:42:01.104949    6359 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:42:01.104965    6359 main.go:141] libmachine: STDERR: 
	I1001 12:42:01.104978    6359 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:42:01.104983    6359 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:42:01.104993    6359 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:01.105026    6359 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f0:3a:a9:3d:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:42:01.106628    6359 main.go:141] libmachine: STDOUT: 
	I1001 12:42:01.106644    6359 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:01.106657    6359 client.go:171] duration metric: took 381.941583ms to LocalClient.Create
	I1001 12:42:03.108803    6359 start.go:128] duration metric: took 2.442784625s to createHost
	I1001 12:42:03.108988    6359 start.go:83] releasing machines lock for "default-k8s-diff-port-402000", held for 2.443371083s
	W1001 12:42:03.109333    6359 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:03.118951    6359 out.go:201] 
	W1001 12:42:03.131030    6359 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:03.131069    6359 out.go:270] * 
	* 
	W1001 12:42:03.133580    6359 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:42:03.141970    6359 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (67.398917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.98s)
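
Every start failure in this run shares the root cause visible in the STDERR capture above: Failed to connect to "/var/run/socket_vmnet": Connection refused. QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which requires a socket_vmnet daemon listening on that UNIX socket; "connection refused" means nothing is serving it on this CI host. Below is a minimal Go sketch, not part of the test suite and purely illustrative, that probes the socket the same way before a VM launch is attempted.

// probe_socket_vmnet.go - hedged sketch: checks whether a socket_vmnet
// daemon is accepting connections on the socket the tests above use.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing command lines
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the condition the logs show: the dial fails before QEMU even starts.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails like this, the likely remediation is restarting the socket_vmnet daemon on the host rather than rerunning the tests, since every qemu2 profile in this run hits the identical error.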

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-044000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (31.70075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
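
This failure, and the embed-certs failures that follow, are secondary: the cluster's first start never succeeded, so the kubeconfig contains no "embed-certs-044000" context and every kubectl --context invocation fails before reaching an API server. The sketch below is a hypothetical helper using client-go's default kubeconfig loading rules; it performs the same existence check that produces kubectl's "context ... does not exist" error.

// context_check.go - hedged sketch of the kubeconfig context lookup.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const name = "embed-certs-044000" // context the failing tests expect
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		// Mirrors kubectl's error for a missing context.
		fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}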

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-044000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-044000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-044000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.8755ms)

** stderr ** 
	error: context "embed-certs-044000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-044000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (28.875625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-044000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (28.900667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
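
The (-want +got) block above is go-cmp diff output: the test compares the expected image set for v1.31.1 against what out/minikube-darwin-arm64 image list --format=json returned, and because the VM never started, every expected entry is reported as missing. Below is a small sketch of how such a diff is produced with github.com/google/go-cmp/cmp; the empty got slice is an assumption that matches this run's unstarted VM.

// image_diff.go - hedged sketch reproducing the -want +got comparison style.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// want is copied from the failing test's expected list above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // a stopped VM reports no images at all
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}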

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-044000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-044000 --alsologtostderr -v=1: exit status 83 (40.57725ms)

-- stdout --
	* The control-plane node embed-certs-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-044000"

-- /stdout --
** stderr ** 
	I1001 12:41:56.404949    6381 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:56.405110    6381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:56.405113    6381 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:56.405116    6381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:56.405227    6381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:56.405445    6381 out.go:352] Setting JSON to false
	I1001 12:41:56.405453    6381 mustload.go:65] Loading cluster: embed-certs-044000
	I1001 12:41:56.405668    6381 config.go:182] Loaded profile config "embed-certs-044000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:56.409206    6381 out.go:177] * The control-plane node embed-certs-044000 host is not running: state=Stopped
	I1001 12:41:56.413172    6381 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-044000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-044000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (29.349791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (29.5785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-044000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
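
Each post-mortem above renders host state with minikube status --format={{.Host}}, a Go text/template evaluated against minikube's status structure; that is why the stdout blocks contain only the bare word "Stopped". The sketch below shows the mechanism; the Status struct here is an illustrative stand-in, not minikube's exact type.

// status_template.go - hedged sketch of template-driven status output.
package main

import (
	"os"
	"text/template"
)

// Status loosely mirrors the fields a --format template can reference.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	s := Status{Name: "embed-certs-044000", Host: "Stopped"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}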

TestStartStop/group/newest-cni/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.005070667s)

-- stdout --
	* [newest-cni-200000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-200000" primary control-plane node in "newest-cni-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:41:56.722403    6398 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:41:56.722541    6398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:56.722544    6398 out.go:358] Setting ErrFile to fd 2...
	I1001 12:41:56.722547    6398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:41:56.722682    6398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:41:56.723714    6398 out.go:352] Setting JSON to false
	I1001 12:41:56.739519    6398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4281,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:41:56.739581    6398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:41:56.744149    6398 out.go:177] * [newest-cni-200000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:41:56.752211    6398 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:41:56.752256    6398 notify.go:220] Checking for updates...
	I1001 12:41:56.759256    6398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:41:56.762138    6398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:41:56.765179    6398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:41:56.768192    6398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:41:56.771207    6398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:41:56.774530    6398 config.go:182] Loaded profile config "default-k8s-diff-port-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:56.774592    6398 config.go:182] Loaded profile config "multinode-301000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:41:56.774644    6398 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:41:56.779177    6398 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 12:41:56.786164    6398 start.go:297] selected driver: qemu2
	I1001 12:41:56.786171    6398 start.go:901] validating driver "qemu2" against <nil>
	I1001 12:41:56.786177    6398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:41:56.788347    6398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1001 12:41:56.788390    6398 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1001 12:41:56.797148    6398 out.go:177] * Automatically selected the socket_vmnet network
	I1001 12:41:56.800280    6398 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1001 12:41:56.800298    6398 cni.go:84] Creating CNI manager for ""
	I1001 12:41:56.800334    6398 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:41:56.800341    6398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 12:41:56.800368    6398 start.go:340] cluster config:
	{Name:newest-cni-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:41:56.804416    6398 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:41:56.813177    6398 out.go:177] * Starting "newest-cni-200000" primary control-plane node in "newest-cni-200000" cluster
	I1001 12:41:56.817174    6398 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:41:56.817191    6398 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:41:56.817205    6398 cache.go:56] Caching tarball of preloaded images
	I1001 12:41:56.817283    6398 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:41:56.817289    6398 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:41:56.817358    6398 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/newest-cni-200000/config.json ...
	I1001 12:41:56.817373    6398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/newest-cni-200000/config.json: {Name:mk3c01de5b8a6d41acf56f08549be902bbd63558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 12:41:56.817605    6398 start.go:360] acquireMachinesLock for newest-cni-200000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:41:56.817641    6398 start.go:364] duration metric: took 29.542µs to acquireMachinesLock for "newest-cni-200000"
	I1001 12:41:56.817654    6398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:41:56.817689    6398 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:41:56.825157    6398 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:41:56.843375    6398 start.go:159] libmachine.API.Create for "newest-cni-200000" (driver="qemu2")
	I1001 12:41:56.843406    6398 client.go:168] LocalClient.Create starting
	I1001 12:41:56.843470    6398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:41:56.843505    6398 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:56.843514    6398 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:56.843554    6398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:41:56.843579    6398 main.go:141] libmachine: Decoding PEM data...
	I1001 12:41:56.843588    6398 main.go:141] libmachine: Parsing certificate...
	I1001 12:41:56.844092    6398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:41:57.003949    6398 main.go:141] libmachine: Creating SSH key...
	I1001 12:41:57.074475    6398 main.go:141] libmachine: Creating Disk image...
	I1001 12:41:57.074482    6398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:41:57.074671    6398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:41:57.083692    6398 main.go:141] libmachine: STDOUT: 
	I1001 12:41:57.083721    6398 main.go:141] libmachine: STDERR: 
	I1001 12:41:57.083776    6398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2 +20000M
	I1001 12:41:57.091619    6398 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:41:57.091633    6398 main.go:141] libmachine: STDERR: 
	I1001 12:41:57.091647    6398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:41:57.091652    6398 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:41:57.091665    6398 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:41:57.091689    6398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:d2:d4:6c:f8:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:41:57.093294    6398 main.go:141] libmachine: STDOUT: 
	I1001 12:41:57.093307    6398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:41:57.093329    6398 client.go:171] duration metric: took 249.923042ms to LocalClient.Create
	I1001 12:41:59.095469    6398 start.go:128] duration metric: took 2.277811458s to createHost
	I1001 12:41:59.095526    6398 start.go:83] releasing machines lock for "newest-cni-200000", held for 2.277933791s
	W1001 12:41:59.095614    6398 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:59.106761    6398 out.go:177] * Deleting "newest-cni-200000" in qemu2 ...
	W1001 12:41:59.150647    6398 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:41:59.150668    6398 start.go:729] Will try again in 5 seconds ...
	I1001 12:42:04.152712    6398 start.go:360] acquireMachinesLock for newest-cni-200000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:04.153189    6398 start.go:364] duration metric: took 367.709µs to acquireMachinesLock for "newest-cni-200000"
	I1001 12:42:04.153358    6398 start.go:93] Provisioning new machine with config: &{Name:newest-cni-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 12:42:04.153603    6398 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 12:42:04.159364    6398 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 12:42:04.210998    6398 start.go:159] libmachine.API.Create for "newest-cni-200000" (driver="qemu2")
	I1001 12:42:04.211062    6398 client.go:168] LocalClient.Create starting
	I1001 12:42:04.211210    6398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/ca.pem
	I1001 12:42:04.211261    6398 main.go:141] libmachine: Decoding PEM data...
	I1001 12:42:04.211280    6398 main.go:141] libmachine: Parsing certificate...
	I1001 12:42:04.211355    6398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19736-1073/.minikube/certs/cert.pem
	I1001 12:42:04.211386    6398 main.go:141] libmachine: Decoding PEM data...
	I1001 12:42:04.211404    6398 main.go:141] libmachine: Parsing certificate...
	I1001 12:42:04.211953    6398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 12:42:04.420455    6398 main.go:141] libmachine: Creating SSH key...
	I1001 12:42:04.625897    6398 main.go:141] libmachine: Creating Disk image...
	I1001 12:42:04.625906    6398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 12:42:04.626141    6398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:42:04.635843    6398 main.go:141] libmachine: STDOUT: 
	I1001 12:42:04.635870    6398 main.go:141] libmachine: STDERR: 
	I1001 12:42:04.635927    6398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2 +20000M
	I1001 12:42:04.644061    6398 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 12:42:04.644082    6398 main.go:141] libmachine: STDERR: 
	I1001 12:42:04.644095    6398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:42:04.644102    6398 main.go:141] libmachine: Starting QEMU VM...
	I1001 12:42:04.644108    6398 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:04.644143    6398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bd:8d:82:49:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:42:04.645797    6398 main.go:141] libmachine: STDOUT: 
	I1001 12:42:04.645812    6398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:04.645824    6398 client.go:171] duration metric: took 434.767667ms to LocalClient.Create
	I1001 12:42:06.647967    6398 start.go:128] duration metric: took 2.494376167s to createHost
	I1001 12:42:06.648040    6398 start.go:83] releasing machines lock for "newest-cni-200000", held for 2.494889625s
	W1001 12:42:06.648362    6398 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:06.656098    6398 out.go:201] 
	W1001 12:42:06.667145    6398 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:06.667177    6398 out.go:270] * 
	* 
	W1001 12:42:06.669778    6398 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:42:06.681057    6398 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (61.810875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.07s)
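
The trace above also shows minikube's start retry flow: the first StartHost fails, the half-created profile is deleted ("Deleting \"newest-cni-200000\" in qemu2 ..."), and after a fixed 5-second pause a single retry runs before the GUEST_PROVISION exit. Below is a stripped-down Go sketch of that control flow; startHost is a hypothetical stand-in that always fails the way this host does.

// retry_flow.go - hedged sketch of the delete-wait-retry pattern in the logs.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is illustrative only; on this CI host it would fail both times.
func startHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "newest-cni-200000"
	if err := startHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}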

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-402000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-402000 create -f testdata/busybox.yaml: exit status 1 (30.330542ms)

** stderr ** 
	error: context "default-k8s-diff-port-402000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-402000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (29.520458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (28.742417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-402000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-402000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-402000 describe deploy/metrics-server -n kube-system: exit status 1 (26.591666ms)

** stderr ** 
	error: context "default-k8s-diff-port-402000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-402000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (29.204792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
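
EnableAddonWhileActive passes --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, so the assertion expects the metrics-server deployment to reference the image with the custom registry prepended. The sketch below reproduces the expected-reference check; the path.Join composition is an assumption about how registry and image combine, chosen to match the expected value printed above.

// addon_image_check.go - hedged sketch of the custom-registry image assertion.
package main

import (
	"fmt"
	"path"
	"strings"
)

func main() {
	registry := "fake.domain"
	image := "registry.k8s.io/echoserver:1.4"
	expected := path.Join(registry, image) // fake.domain/registry.k8s.io/echoserver:1.4

	deploymentInfo := "" // empty here: the cluster never started, so kubectl returned nothing
	if !strings.Contains(deploymentInfo, expected) {
		fmt.Printf("addon did not load correct image. Expected to contain %q\n", expected)
	}
}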

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (6.266162041s)

-- stdout --
	* [default-k8s-diff-port-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-402000" primary control-plane node in "default-k8s-diff-port-402000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:42:05.502776    6446 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:42:05.502894    6446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:05.502899    6446 out.go:358] Setting ErrFile to fd 2...
	I1001 12:42:05.502902    6446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:05.503022    6446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:42:05.504022    6446 out.go:352] Setting JSON to false
	I1001 12:42:05.521262    6446 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4290,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:42:05.521338    6446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:42:05.526034    6446 out.go:177] * [default-k8s-diff-port-402000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:42:05.532995    6446 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:42:05.533048    6446 notify.go:220] Checking for updates...
	I1001 12:42:05.538025    6446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:42:05.541015    6446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:42:05.542544    6446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:42:05.545996    6446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:42:05.549046    6446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:42:05.552298    6446 config.go:182] Loaded profile config "default-k8s-diff-port-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:42:05.552566    6446 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:42:05.557002    6446 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:42:05.563986    6446 start.go:297] selected driver: qemu2
	I1001 12:42:05.563993    6446 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:42:05.564068    6446 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:42:05.566510    6446 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 12:42:05.566535    6446 cni.go:84] Creating CNI manager for ""
	I1001 12:42:05.566564    6446 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:42:05.566588    6446 start.go:340] cluster config:
	{Name:default-k8s-diff-port-402000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-402000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:42:05.570378    6446 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:42:05.578005    6446 out.go:177] * Starting "default-k8s-diff-port-402000" primary control-plane node in "default-k8s-diff-port-402000" cluster
	I1001 12:42:05.581961    6446 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:42:05.581977    6446 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:42:05.581987    6446 cache.go:56] Caching tarball of preloaded images
	I1001 12:42:05.582056    6446 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:42:05.582062    6446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:42:05.582136    6446 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/default-k8s-diff-port-402000/config.json ...
	I1001 12:42:05.582649    6446 start.go:360] acquireMachinesLock for default-k8s-diff-port-402000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:06.648229    6446 start.go:364] duration metric: took 1.065518167s to acquireMachinesLock for "default-k8s-diff-port-402000"
	I1001 12:42:06.648339    6446 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:42:06.648371    6446 fix.go:54] fixHost starting: 
	I1001 12:42:06.649047    6446 fix.go:112] recreateIfNeeded on default-k8s-diff-port-402000: state=Stopped err=<nil>
	W1001 12:42:06.649085    6446 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:42:06.664007    6446 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-402000" ...
	I1001 12:42:06.670073    6446 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:06.670275    6446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f0:3a:a9:3d:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:42:06.679604    6446 main.go:141] libmachine: STDOUT: 
	I1001 12:42:06.679665    6446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:06.679785    6446 fix.go:56] duration metric: took 31.422333ms for fixHost
	I1001 12:42:06.679801    6446 start.go:83] releasing machines lock for "default-k8s-diff-port-402000", held for 31.525459ms
	W1001 12:42:06.679836    6446 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:06.679983    6446 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:06.679999    6446 start.go:729] Will try again in 5 seconds ...
	I1001 12:42:11.682137    6446 start.go:360] acquireMachinesLock for default-k8s-diff-port-402000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:11.682705    6446 start.go:364] duration metric: took 427.25µs to acquireMachinesLock for "default-k8s-diff-port-402000"
	I1001 12:42:11.682780    6446 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:42:11.682800    6446 fix.go:54] fixHost starting: 
	I1001 12:42:11.683562    6446 fix.go:112] recreateIfNeeded on default-k8s-diff-port-402000: state=Stopped err=<nil>
	W1001 12:42:11.683589    6446 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:42:11.693058    6446 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-402000" ...
	I1001 12:42:11.697087    6446 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:11.697276    6446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f0:3a:a9:3d:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/default-k8s-diff-port-402000/disk.qcow2
	I1001 12:42:11.706687    6446 main.go:141] libmachine: STDOUT: 
	I1001 12:42:11.706741    6446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:11.706828    6446 fix.go:56] duration metric: took 24.028917ms for fixHost
	I1001 12:42:11.706844    6446 start.go:83] releasing machines lock for "default-k8s-diff-port-402000", held for 24.11375ms
	W1001 12:42:11.707049    6446 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:11.712996    6446 out.go:201] 
	W1001 12:42:11.717197    6446 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:11.717224    6446 out.go:270] * 
	* 
	W1001 12:42:11.720552    6446 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:42:11.728145    6446 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-402000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (67.002042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.33s)
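Every failure in this group reduces to the same root cause visible in the stderr above: nothing is listening on /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client cannot obtain a network file descriptor for QEMU and the driver start aborts. The condition can be reproduced outside the test harness with a plain unix-socket dial. The following Go sketch is a hypothetical standalone probe (not part of minikube); the socket path is taken from the SocketVMnetPath field in the cluster config above.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// probeSocketVMnet dials the unix socket that socket_vmnet_client hands
	// to qemu-system-aarch64; a "connection refused" here is the same state
	// the driver reports in the log above.
	func probeSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails on the CI host, restarting the socket_vmnet daemon (however it is supervised on this agent) is the likely fix; the single 5-second retry seen in the log cannot succeed while the listener is down.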

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.186482833s)

-- stdout --
	* [newest-cni-200000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-200000" primary control-plane node in "newest-cni-200000" cluster
	* Restarting existing qemu2 VM for "newest-cni-200000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-200000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 12:42:10.545603    6481 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:42:10.545738    6481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:10.545742    6481 out.go:358] Setting ErrFile to fd 2...
	I1001 12:42:10.545745    6481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:10.545861    6481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:42:10.546867    6481 out.go:352] Setting JSON to false
	I1001 12:42:10.562928    6481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4295,"bootTime":1727807435,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:42:10.562998    6481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:42:10.568247    6481 out.go:177] * [newest-cni-200000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:42:10.575339    6481 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:42:10.575387    6481 notify.go:220] Checking for updates...
	I1001 12:42:10.582230    6481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:42:10.585282    6481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:42:10.588245    6481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:42:10.591242    6481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:42:10.594286    6481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:42:10.597550    6481 config.go:182] Loaded profile config "newest-cni-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:42:10.597815    6481 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:42:10.602199    6481 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:42:10.609167    6481 start.go:297] selected driver: qemu2
	I1001 12:42:10.609173    6481 start.go:901] validating driver "qemu2" against &{Name:newest-cni-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:newest-cni-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Li
stenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:42:10.609218    6481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:42:10.611529    6481 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1001 12:42:10.611554    6481 cni.go:84] Creating CNI manager for ""
	I1001 12:42:10.611580    6481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 12:42:10.611610    6481 start.go:340] cluster config:
	{Name:newest-cni-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-200000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:42:10.615122    6481 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 12:42:10.623069    6481 out.go:177] * Starting "newest-cni-200000" primary control-plane node in "newest-cni-200000" cluster
	I1001 12:42:10.627226    6481 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 12:42:10.627244    6481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 12:42:10.627257    6481 cache.go:56] Caching tarball of preloaded images
	I1001 12:42:10.627336    6481 preload.go:172] Found /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 12:42:10.627349    6481 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 12:42:10.627419    6481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/newest-cni-200000/config.json ...
	I1001 12:42:10.627914    6481 start.go:360] acquireMachinesLock for newest-cni-200000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:10.627942    6481 start.go:364] duration metric: took 22.209µs to acquireMachinesLock for "newest-cni-200000"
	I1001 12:42:10.627951    6481 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:42:10.627956    6481 fix.go:54] fixHost starting: 
	I1001 12:42:10.628080    6481 fix.go:112] recreateIfNeeded on newest-cni-200000: state=Stopped err=<nil>
	W1001 12:42:10.628089    6481 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:42:10.632172    6481 out.go:177] * Restarting existing qemu2 VM for "newest-cni-200000" ...
	I1001 12:42:10.640301    6481 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:10.640352    6481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bd:8d:82:49:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:42:10.642378    6481 main.go:141] libmachine: STDOUT: 
	I1001 12:42:10.642396    6481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:10.642428    6481 fix.go:56] duration metric: took 14.471083ms for fixHost
	I1001 12:42:10.642432    6481 start.go:83] releasing machines lock for "newest-cni-200000", held for 14.485625ms
	W1001 12:42:10.642439    6481 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:10.642471    6481 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:10.642476    6481 start.go:729] Will try again in 5 seconds ...
	I1001 12:42:15.644600    6481 start.go:360] acquireMachinesLock for newest-cni-200000: {Name:mkd586965df96a03f39b47bcb5cf7ca52d8147db Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 12:42:15.645019    6481 start.go:364] duration metric: took 312.208µs to acquireMachinesLock for "newest-cni-200000"
	I1001 12:42:15.645148    6481 start.go:96] Skipping create...Using existing machine configuration
	I1001 12:42:15.645168    6481 fix.go:54] fixHost starting: 
	I1001 12:42:15.645886    6481 fix.go:112] recreateIfNeeded on newest-cni-200000: state=Stopped err=<nil>
	W1001 12:42:15.645914    6481 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 12:42:15.656408    6481 out.go:177] * Restarting existing qemu2 VM for "newest-cni-200000" ...
	I1001 12:42:15.660422    6481 qemu.go:418] Using hvf for hardware acceleration
	I1001 12:42:15.660634    6481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:bd:8d:82:49:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19736-1073/.minikube/machines/newest-cni-200000/disk.qcow2
	I1001 12:42:15.670478    6481 main.go:141] libmachine: STDOUT: 
	I1001 12:42:15.670564    6481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 12:42:15.670664    6481 fix.go:56] duration metric: took 25.497792ms for fixHost
	I1001 12:42:15.670678    6481 start.go:83] releasing machines lock for "newest-cni-200000", held for 25.638459ms
	W1001 12:42:15.670851    6481 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-200000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-200000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 12:42:15.678364    6481 out.go:201] 
	W1001 12:42:15.682441    6481 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 12:42:15.682476    6481 out.go:270] * 
	* 
	W1001 12:42:15.685015    6481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 12:42:15.694326    6481 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-200000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (66.00725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-402000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (32.245125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
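The context "default-k8s-diff-port-402000" does not exist error comes from kubeconfig loading: SecondStart exited before (re)writing the profile's context, so any client pointed at that context fails immediately. A rough client-go sketch of the same lookup follows; it is illustrative only (the test's real client construction differs), assuming k8s.io/client-go/tools/clientcmd.

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig chain (KUBECONFIG, then ~/.kube/config)
		// and pin the context under test. ClientConfig() returns an error when
		// that context is absent, which is the state this test ran into.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-402000"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("context resolves; API server at", cfg.Host)
	}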

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-402000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-402000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-402000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.9435ms)

** stderr ** 
	error: context "default-k8s-diff-port-402000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-402000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (28.7045ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-402000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (29.269833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
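The (-want +got) block is go-cmp diff notation: every expected image carries a leading "-" because image list returned nothing at all from the never-started VM. A minimal sketch of how such a diff is produced, assuming github.com/google/go-cmp (illustrative; the test's exact want-list assembly differs):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: `image list` had no running host to query

		// Entries present only in `want` are printed with "-", entries present
		// only in `got` with "+", matching the report output above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
		}
	}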

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-402000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-402000 --alsologtostderr -v=1: exit status 83 (39.471542ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-402000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-402000"

-- /stdout --
** stderr ** 
	I1001 12:42:11.993469    6502 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:42:11.993619    6502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:11.993622    6502 out.go:358] Setting ErrFile to fd 2...
	I1001 12:42:11.993624    6502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:11.993765    6502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:42:11.993961    6502 out.go:352] Setting JSON to false
	I1001 12:42:11.993972    6502 mustload.go:65] Loading cluster: default-k8s-diff-port-402000
	I1001 12:42:11.994199    6502 config.go:182] Loaded profile config "default-k8s-diff-port-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:42:11.998298    6502 out.go:177] * The control-plane node default-k8s-diff-port-402000 host is not running: state=Stopped
	I1001 12:42:12.002205    6502 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-402000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-402000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (29.17425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (28.472875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-402000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
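Each post-mortem in this report shells out to the minikube binary and branches on the raw exit status (80 for the failed start, 83 for pause against a stopped host, 7 for status on a stopped host). A stripped-down sketch of that pattern with os/exec; runStatus is a hypothetical helper, not the actual code in helpers_test.go.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// runStatus invokes `minikube status` the way the harness does and
	// surfaces the exit code instead of treating any failure as fatal.
	func runStatus(binary, profile string) (int, []byte, error) {
		cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		if err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				return exitErr.ExitCode(), out, nil // e.g. 7 when the host is Stopped
			}
			return 0, out, err // binary missing or not executable
		}
		return 0, out, nil
	}

	func main() {
		code, out, err := runStatus("out/minikube-darwin-arm64", "default-k8s-diff-port-402000")
		if err != nil {
			panic(err)
		}
		fmt.Printf("exit %d, host state: %s", code, out)
	}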

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-200000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (29.91925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-200000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-200000 --alsologtostderr -v=1: exit status 83 (41.840167ms)

-- stdout --
	* The control-plane node newest-cni-200000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-200000"

-- /stdout --
** stderr ** 
	I1001 12:42:15.875067    6529 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:42:15.875223    6529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:15.875225    6529 out.go:358] Setting ErrFile to fd 2...
	I1001 12:42:15.875228    6529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:42:15.875361    6529 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:42:15.875573    6529 out.go:352] Setting JSON to false
	I1001 12:42:15.875581    6529 mustload.go:65] Loading cluster: newest-cni-200000
	I1001 12:42:15.875815    6529 config.go:182] Loaded profile config "newest-cni-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:42:15.879393    6529 out.go:177] * The control-plane node newest-cni-200000 host is not running: state=Stopped
	I1001 12:42:15.883244    6529 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-200000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-200000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (29.87775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (29.644167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-200000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (153/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 18.56
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.1
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 212.29
29 TestAddons/serial/Volcano 38.01
31 TestAddons/serial/GCPAuth/Namespaces 0.09
33 TestAddons/parallel/Registry 18.82
34 TestAddons/parallel/Ingress 17.06
35 TestAddons/parallel/InspektorGadget 11.24
36 TestAddons/parallel/MetricsServer 5.25
38 TestAddons/parallel/CSI 50.32
39 TestAddons/parallel/Headlamp 16.65
40 TestAddons/parallel/CloudSpanner 5.16
41 TestAddons/parallel/LocalPath 40.89
42 TestAddons/parallel/NvidiaDevicePlugin 5.19
43 TestAddons/parallel/Yakd 10.24
44 TestAddons/StoppedEnableDisable 9.41
52 TestHyperKitDriverInstallOrUpdate 11.06
55 TestErrorSpam/setup 34.95
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.23
58 TestErrorSpam/pause 0.69
59 TestErrorSpam/unpause 0.65
60 TestErrorSpam/stop 64.27
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 77.12
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.1
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.04
71 TestFunctional/serial/CacheCmd/cache/add_remote 9.34
72 TestFunctional/serial/CacheCmd/cache/add_local 1.15
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.28
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.02
80 TestFunctional/serial/ExtraConfig 38.9
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.65
83 TestFunctional/serial/LogsFileCmd 0.6
84 TestFunctional/serial/InvalidService 4.43
86 TestFunctional/parallel/ConfigCmd 0.23
87 TestFunctional/parallel/DashboardCmd 8.96
88 TestFunctional/parallel/DryRun 0.24
89 TestFunctional/parallel/InternationalLanguage 0.12
90 TestFunctional/parallel/StatusCmd 0.26
95 TestFunctional/parallel/AddonsCmd 0.18
96 TestFunctional/parallel/PersistentVolumeClaim 25.96
98 TestFunctional/parallel/SSHCmd 0.14
99 TestFunctional/parallel/CpCmd 0.45
101 TestFunctional/parallel/FileSync 0.11
102 TestFunctional/parallel/CertSync 0.42
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
110 TestFunctional/parallel/License 1.34
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
120 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
123 TestFunctional/parallel/ServiceCmd/List 0.32
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.13
126 TestFunctional/parallel/ServiceCmd/Format 0.1
127 TestFunctional/parallel/ServiceCmd/URL 0.1
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
129 TestFunctional/parallel/ProfileCmd/profile_list 0.14
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
131 TestFunctional/parallel/MountCmd/any-port 10.39
132 TestFunctional/parallel/MountCmd/specific-port 0.77
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.04
134 TestFunctional/parallel/Version/short 0.04
135 TestFunctional/parallel/Version/components 0.15
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
140 TestFunctional/parallel/ImageCommands/ImageBuild 4.72
141 TestFunctional/parallel/ImageCommands/Setup 1.72
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.71
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.38
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.27
149 TestFunctional/parallel/DockerEnv/bash 0.31
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 233.35
160 TestMultiControlPlane/serial/DeployApp 9.01
161 TestMultiControlPlane/serial/PingHostFromPods 0.72
162 TestMultiControlPlane/serial/AddWorkerNode 69.03
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.49
165 TestMultiControlPlane/serial/CopyFile 4.08
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 1.86
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 4.7
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.3
276 TestNoKubernetes/serial/Stop 3.76
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.6
293 TestStartStop/group/old-k8s-version/serial/Stop 3.47
294 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
306 TestStartStop/group/no-preload/serial/Stop 3.39
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
311 TestStartStop/group/embed-certs/serial/Stop 3
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.92
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
333 TestStartStop/group/newest-cni/serial/Stop 3.57
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 11:46:50.825395    1595 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1001 11:46:50.825846    1595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-368000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-368000: exit status 85 (92.962042ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-368000 | jenkins | v1.34.0 | 01 Oct 24 11:46 PDT |          |
	|         | -p download-only-368000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 11:46:09
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 11:46:09.291925    1596 out.go:345] Setting OutFile to fd 1 ...
	I1001 11:46:09.292083    1596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:09.292086    1596 out.go:358] Setting ErrFile to fd 2...
	I1001 11:46:09.292089    1596 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:09.292209    1596 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	W1001 11:46:09.292302    1596 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19736-1073/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19736-1073/.minikube/config/config.json: no such file or directory
	I1001 11:46:09.293528    1596 out.go:352] Setting JSON to true
	I1001 11:46:09.310662    1596 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":934,"bootTime":1727807435,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 11:46:09.310726    1596 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 11:46:09.315277    1596 out.go:97] [download-only-368000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 11:46:09.315443    1596 notify.go:220] Checking for updates...
	W1001 11:46:09.315505    1596 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 11:46:09.319015    1596 out.go:169] MINIKUBE_LOCATION=19736
	I1001 11:46:09.326102    1596 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 11:46:09.330066    1596 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 11:46:09.334013    1596 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 11:46:09.337060    1596 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	W1001 11:46:09.343017    1596 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 11:46:09.343294    1596 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 11:46:09.349103    1596 out.go:97] Using the qemu2 driver based on user configuration
	I1001 11:46:09.349123    1596 start.go:297] selected driver: qemu2
	I1001 11:46:09.349139    1596 start.go:901] validating driver "qemu2" against <nil>
	I1001 11:46:09.349210    1596 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 11:46:09.350879    1596 out.go:169] Automatically selected the socket_vmnet network
	I1001 11:46:09.356630    1596 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 11:46:09.356759    1596 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 11:46:09.356814    1596 cni.go:84] Creating CNI manager for ""
	I1001 11:46:09.356849    1596 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 11:46:09.356895    1596 start.go:340] cluster config:
	{Name:download-only-368000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-368000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 11:46:09.362247    1596 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 11:46:09.367081    1596 out.go:97] Downloading VM boot image ...
	I1001 11:46:09.367098    1596 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1001 11:46:27.714087    1596 out.go:97] Starting "download-only-368000" primary control-plane node in "download-only-368000" cluster
	I1001 11:46:27.714111    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:28.005233    1596 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 11:46:28.005335    1596 cache.go:56] Caching tarball of preloaded images
	I1001 11:46:28.006156    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:28.013150    1596 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 11:46:28.013180    1596 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:28.619724    1596 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 11:46:49.225858    1596 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:49.226036    1596 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:49.932157    1596 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 11:46:49.932372    1596 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/download-only-368000/config.json ...
	I1001 11:46:49.932391    1596 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/download-only-368000/config.json: {Name:mk9628911aba49ea32a809a43c6ae648f373b516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 11:46:49.932731    1596 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 11:46:49.932938    1596 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1001 11:46:50.771966    1596 out.go:193] 
	W1001 11:46:50.780030    1596 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19736-1073/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0 0x1088c96c0] Decompressors:map[bz2:0x1400059f710 gz:0x1400059f718 tar:0x1400059f6c0 tar.bz2:0x1400059f6d0 tar.gz:0x1400059f6e0 tar.xz:0x1400059f6f0 tar.zst:0x1400059f700 tbz2:0x1400059f6d0 tgz:0x1400059f6e0 txz:0x1400059f6f0 tzst:0x1400059f700 xz:0x1400059f720 zip:0x1400059f730 zst:0x1400059f728] Getters:map[file:0x140014b25c0 http:0x140000b8140 https:0x140000b8190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1001 11:46:50.780078    1596 out_reason.go:110] 
	W1001 11:46:50.788900    1596 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 11:46:50.793820    1596 out.go:193] 
	
	
	* The control-plane node download-only-368000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-368000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
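
Note: the substantive v1.20.0 failure captured in the stdout above is the kubectl cache download, and it is the .sha256 checksum file that 404s, not the binary itself. A manual check, using the exact URL from the log (result as observed during this run):

    curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # returned 404 at the time of this run, so the getter aborts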

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-368000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (18.56s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-323000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-323000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (18.564678333s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.56s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 11:47:09.737765    1595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1001 11:47:09.737813    1595 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-323000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-323000: exit status 85 (74.103875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-368000 | jenkins | v1.34.0 | 01 Oct 24 11:46 PDT |                     |
	|         | -p download-only-368000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 11:46 PDT | 01 Oct 24 11:46 PDT |
	| delete  | -p download-only-368000        | download-only-368000 | jenkins | v1.34.0 | 01 Oct 24 11:46 PDT | 01 Oct 24 11:46 PDT |
	| start   | -o=json --download-only        | download-only-323000 | jenkins | v1.34.0 | 01 Oct 24 11:46 PDT |                     |
	|         | -p download-only-323000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 11:46:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 11:46:51.198107    1627 out.go:345] Setting OutFile to fd 1 ...
	I1001 11:46:51.198229    1627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:51.198233    1627 out.go:358] Setting ErrFile to fd 2...
	I1001 11:46:51.198235    1627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 11:46:51.198369    1627 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 11:46:51.199409    1627 out.go:352] Setting JSON to true
	I1001 11:46:51.215390    1627 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":976,"bootTime":1727807435,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 11:46:51.215463    1627 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 11:46:51.219459    1627 out.go:97] [download-only-323000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 11:46:51.219553    1627 notify.go:220] Checking for updates...
	I1001 11:46:51.223488    1627 out.go:169] MINIKUBE_LOCATION=19736
	I1001 11:46:51.226615    1627 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 11:46:51.231460    1627 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 11:46:51.238499    1627 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 11:46:51.241585    1627 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	W1001 11:46:51.248566    1627 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 11:46:51.248773    1627 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 11:46:51.253516    1627 out.go:97] Using the qemu2 driver based on user configuration
	I1001 11:46:51.253526    1627 start.go:297] selected driver: qemu2
	I1001 11:46:51.253531    1627 start.go:901] validating driver "qemu2" against <nil>
	I1001 11:46:51.253601    1627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 11:46:51.256528    1627 out.go:169] Automatically selected the socket_vmnet network
	I1001 11:46:51.261685    1627 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 11:46:51.261853    1627 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 11:46:51.261873    1627 cni.go:84] Creating CNI manager for ""
	I1001 11:46:51.261897    1627 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 11:46:51.261903    1627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 11:46:51.261944    1627 start.go:340] cluster config:
	{Name:download-only-323000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-323000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 11:46:51.265344    1627 iso.go:125] acquiring lock: {Name:mk749d3a5db31c259cbd6465e91cf5073e7cc750 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 11:46:51.268616    1627 out.go:97] Starting "download-only-323000" primary control-plane node in "download-only-323000" cluster
	I1001 11:46:51.268625    1627 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 11:46:51.892569    1627 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 11:46:51.892669    1627 cache.go:56] Caching tarball of preloaded images
	I1001 11:46:51.893481    1627 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 11:46:51.899031    1627 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 11:46:51.899054    1627 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1001 11:46:52.462112    1627 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19736-1073/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-323000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-323000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
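
Note: preload downloads embed an md5 digest in the URL query (the "getting checksum" line above), which minikube verifies after the fetch. A rough manual equivalent, with the URL and digest copied from the log (`md5 -q` is the macOS digest tool):

    url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4"
    curl -fsSLo preload.tar.lz4 "$url"
    md5 -q preload.tar.lz4   # expected: 402f69b5e09ccb1e1dbe401b4cdd104d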

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.10s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-323000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-075000
addons_test.go:932: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-075000: exit status 85 (60.987667ms)

-- stdout --
	* Profile "addons-075000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-075000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-075000
addons_test.go:943: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-075000: exit status 85 (57.007834ms)

-- stdout --
	* Profile "addons-075000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-075000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (212.29s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-075000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-075000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m32.287721583s)
--- PASS: TestAddons/Setup (212.29s)

TestAddons/serial/Volcano (38.01s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: volcano-controller stabilized in 9.000666ms
addons_test.go:801: volcano-scheduler stabilized in 9.036375ms
addons_test.go:809: volcano-admission stabilized in 9.079833ms
addons_test.go:823: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-ckpfc" [433b64a1-9f8f-47f2-bee9-655f18803ef6] Running
addons_test.go:823: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006137458s
addons_test.go:827: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-jvfd4" [74558e45-b6cc-4afe-ac5a-ae19240fe113] Running
addons_test.go:827: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00672275s
addons_test.go:831: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-lgjds" [4bdb5a44-6464-496a-b88f-f380b70ef8a5] Running
addons_test.go:831: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.008931417s
addons_test.go:836: (dbg) Run:  kubectl --context addons-075000 delete -n volcano-system job volcano-admission-init
addons_test.go:842: (dbg) Run:  kubectl --context addons-075000 create -f testdata/vcjob.yaml
addons_test.go:850: (dbg) Run:  kubectl --context addons-075000 get vcjob -n my-volcano
addons_test.go:868: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5c48efcd-c005-43dd-83ea-e0128bb52838] Pending
helpers_test.go:344: "test-job-nginx-0" [5c48efcd-c005-43dd-83ea-e0128bb52838] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5c48efcd-c005-43dd-83ea-e0128bb52838] Running
addons_test.go:868: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004372667s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable volcano --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable volcano --alsologtostderr -v=1: (10.722473541s)
--- PASS: TestAddons/serial/Volcano (38.01s)
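
Note: the label-based health waits above come from the test helpers; roughly the same checks can be run by hand with `kubectl wait` (context, namespaces, and selectors copied from the log):

    kubectl --context addons-075000 -n volcano-system wait pod -l app=volcano-scheduler --for=condition=Ready --timeout=6m
    kubectl --context addons-075000 -n my-volcano wait pod -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=3m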

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-075000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-075000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Registry (18.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.44475ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z8zv5" [16c01a82-7f4c-4b3c-8ad2-0089ee964b91] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005858834s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jqcpq" [2e2147c3-a0d7-4168-9dba-190535782cb2] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005538625s
addons_test.go:331: (dbg) Run:  kubectl --context addons-075000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-075000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-075000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.527100375s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 ip
2024/10/01 11:59:49 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.82s)
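
Note: the decisive check above is an in-cluster probe of the registry Service's cluster DNS name; it can be rerun by hand exactly as the test does:

    kubectl --context addons-075000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"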

                                                
                                    
TestAddons/parallel/Ingress (17.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-075000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-075000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-075000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d137e0ce-74ea-4391-8464-efd49245c54d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d137e0ce-74ea-4391-8464-efd49245c54d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.009339417s
I1001 12:00:54.916015    1595 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-075000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable ingress-dns --alsologtostderr -v=1: (1.105841042s)
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable ingress --alsologtostderr -v=1: (7.252861791s)
--- PASS: TestAddons/parallel/Ingress (17.06s)
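
Note: the test exercises both ingress paths: HTTP routing by Host header (via ssh into the node) and name resolution through ingress-dns against the node IP reported by `minikube ip`. Both are reproducible by hand with the values from the log:

    out/minikube-darwin-arm64 -p addons-075000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.105.2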

                                                
                                    
TestAddons/parallel/InspektorGadget (11.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-whqkg" [ac49d86d-3232-44bf-8715-06d24dc86a77] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005204208s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.2295175s)
--- PASS: TestAddons/parallel/InspektorGadget (11.24s)

TestAddons/parallel/MetricsServer (5.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.4885ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-pzv5g" [351f8d6c-1d13-4173-afae-1396dac28e86] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005197625s
addons_test.go:402: (dbg) Run:  kubectl --context addons-075000 top pods -n kube-system
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.25s)
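
Note: once the metrics-server pod reports healthy, the same resource query the test issues works directly:

    kubectl --context addons-075000 top pods -n kube-system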

                                                
                                    
TestAddons/parallel/CSI (50.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1001 12:00:07.769799    1595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 12:00:07.772214    1595 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 12:00:07.772223    1595 kapi.go:107] duration metric: took 2.459417ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.463125ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a29bc464-744f-44e5-9753-21e2047f810a] Pending
helpers_test.go:344: "task-pv-pod" [a29bc464-744f-44e5-9753-21e2047f810a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a29bc464-744f-44e5-9753-21e2047f810a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.01016775s
addons_test.go:511: (dbg) Run:  kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-075000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-075000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-075000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-075000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [23462623-3e31-41a9-a48f-977b58d1ba05] Pending
helpers_test.go:344: "task-pv-pod-restore" [23462623-3e31-41a9-a48f-977b58d1ba05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [23462623-3e31-41a9-a48f-977b58d1ba05] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002436542s
addons_test.go:553: (dbg) Run:  kubectl --context addons-075000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-075000 delete pod task-pv-pod-restore: (1.008110542s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-075000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-075000 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.134916958s)
--- PASS: TestAddons/parallel/CSI (50.32s)
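
Note: the sequence above is create PVC → pod → VolumeSnapshot → delete the originals → restore both from the snapshot. Stripped of the PVC polling, the bare steps are (the YAML manifests are minikube's testdata, as referenced in the log):

    kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-075000 delete pod task-pv-pod
    kubectl --context addons-075000 delete pvc hpvc
    kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-075000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml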

                                                
                                    
TestAddons/parallel/Headlamp (16.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-075000 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-tqqlc" [a1104188-5567-400c-a946-758d88242dc0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-tqqlc" [a1104188-5567-400c-a946-758d88242dc0] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006210375s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable headlamp --alsologtostderr -v=1: (5.307738166s)
--- PASS: TestAddons/parallel/Headlamp (16.65s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-c6cr7" [be386707-db01-4b9b-b835-131b68337b3d] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.001848708s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (40.89s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-075000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-075000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [590b3993-4468-43ed-8098-3bcca2e12c3b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [590b3993-4468-43ed-8098-3bcca2e12c3b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [590b3993-4468-43ed-8098-3bcca2e12c3b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00617225s
addons_test.go:899: (dbg) Run:  kubectl --context addons-075000 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 ssh "cat /opt/local-path-provisioner/pvc-0ada4c68-4bb0-4d9d-b088-b0f8f4ca5956_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-075000 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-075000 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.367599459s)
--- PASS: TestAddons/parallel/LocalPath (40.89s)
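
Note: local-path provisions volumes as plain directories under /opt/local-path-provisioner on the node, so the data written by the pod can be read back over ssh; the pvc-<uid> directory name comes from `kubectl get pvc test-pvc -o=json`, as the test does above:

    out/minikube-darwin-arm64 -p addons-075000 ssh "cat /opt/local-path-provisioner/pvc-0ada4c68-4bb0-4d9d-b088-b0f8f4ca5956_default_test-pvc/file1"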

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4s6gt" [34232c72-8fc3-43da-8279-2e087c3e7215] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010193s
addons_test.go:959: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-075000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.19s)

TestAddons/parallel/Yakd (10.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-lwfps" [75462016-2586-4ed3-9c5e-554682592a03] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007015917s
addons_test.go:971: (dbg) Run:  out/minikube-darwin-arm64 -p addons-075000 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-darwin-arm64 -p addons-075000 addons disable yakd --alsologtostderr -v=1: (5.229229792s)
--- PASS: TestAddons/parallel/Yakd (10.24s)

TestAddons/StoppedEnableDisable (9.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-075000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-075000: (9.220416417s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-075000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-075000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-075000
--- PASS: TestAddons/StoppedEnableDisable (9.41s)

TestHyperKitDriverInstallOrUpdate (11.06s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1001 12:27:17.471193    1595 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 12:27:17.471380    1595 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1001 12:27:19.433105    1595 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1001 12:27:19.433342    1595 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1001 12:27:19.433386    1595 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit
I1001 12:27:19.926612    1595 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40 0x106c52d40] Decompressors:map[bz2:0x14000739aa0 gz:0x14000739aa8 tar:0x14000739a50 tar.bz2:0x14000739a60 tar.gz:0x14000739a70 tar.xz:0x14000739a80 tar.zst:0x14000739a90 tbz2:0x14000739a60 tgz:0x14000739a70 txz:0x14000739a80 tzst:0x14000739a90 xz:0x14000739ab0 zip:0x14000739ac0 zst:0x14000739ab8] Getters:map[file:0x1400176ff00 http:0x1400059d770 https:0x1400059d7c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 12:27:19.926725    1595 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperKitDriverInstallOrUpdate1966852818/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.06s)
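
Note: although this test passes, the log above captures minikube's driver-download fallback: the arm64-suffixed v1.3.0 release asset does not exist (its checksum fetch 404s), so the download is retried against the unsuffixed "common" asset. Below is a minimal, self-contained Go sketch of that try-arch-then-fallback pattern; the URL comes from the log, but the helper and control flow are illustrative, not minikube's actual install.go/download.go code.

// Hypothetical sketch: probe the GOARCH-suffixed release asset first and
// fall back to the unsuffixed "common" asset when it is missing, which is
// the sequence the log records for docker-machine-driver-hyperkit v1.3.0.
package main

import (
	"fmt"
	"net/http"
	"runtime"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"

// exists reports whether the release asset answers with HTTP 200.
func exists(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	archURL := fmt.Sprintf("%s-%s", base, runtime.GOARCH) // e.g. ...-arm64
	switch {
	case exists(archURL):
		fmt.Println("downloading arch-specific driver:", archURL)
	case exists(base):
		fmt.Println("arch-specific asset missing (404); falling back to:", base)
	default:
		fmt.Println("no driver asset available")
	}
}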

TestErrorSpam/setup (34.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-818000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-818000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 --driver=qemu2 : (34.945651291s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (34.95s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 status
--- PASS: TestErrorSpam/status (0.23s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 unpause
--- PASS: TestErrorSpam/unpause (0.65s)

TestErrorSpam/stop (64.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop: (12.201792958s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop: (26.034350459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-818000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-818000 stop: (26.033110834s)
--- PASS: TestErrorSpam/stop (64.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19736-1073/.minikube/files/etc/test/nested/copy/1595/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-755000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m17.114393291s)
--- PASS: TestFunctional/serial/StartWithProxy (77.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.1s)

=== RUN   TestFunctional/serial/SoftStart
I1001 12:04:12.719260    1595 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-755000 --alsologtostderr -v=8: (38.099288625s)
functional_test.go:663: soft start took 38.099783625s for "functional-755000" cluster.
I1001 12:04:50.818604    1595 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.10s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-755000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:3.1: (3.545440542s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:3.3: (3.443299125s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 cache add registry.k8s.io/pause:latest: (2.34833525s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local227049076/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache add minikube-local-cache-test:functional-755000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache delete minikube-local-cache-test:functional-755000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-755000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (77.815625ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 cache reload: (1.961629959s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
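
Note: the sequence above verifies the cache contract end to end: the image is removed inside the node behind minikube's back, `crictl inspecti` is expected to fail, and `minikube cache reload` must push the cached image back into the runtime. Here is a minimal sketch of the same round trip driven from Go with os/exec, assuming `minikube` is on PATH and the functional-755000 profile exists; this is illustrative, not the test's own helper code.

// Sketch of the cache-reload round trip logged above.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes minikube with the given arguments and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-755000"
	// 1. Delete the image inside the node, bypassing minikube's cache.
	run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// 2. The image is now gone, so crictl inspecti must fail.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail while the image is absent")
	}
	// 3. Reload the cache: minikube re-pushes every cached image into the node.
	run("-p", p, "cache", "reload")
	// 4. The image is back, so inspecti succeeds again.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("expected inspecti to succeed after cache reload:", err)
	}
}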

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.28s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 kubectl -- --context functional-755000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 kubectl -- --context functional-755000 get pods: (2.278319417s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.28s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-755000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-755000 get pods: (1.0188945s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.02s)

TestFunctional/serial/ExtraConfig (38.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 12:05:42.830867    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:42.838460    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:42.850401    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:42.873773    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:42.917129    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:43.000525    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:43.163880    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:43.487315    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:44.131056    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:05:45.414637    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-755000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.900228375s)
functional_test.go:761: restart took 38.900322917s for "functional-755000" cluster.
I1001 12:05:45.998514    1595 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.90s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-755000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2440406023/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.43s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-755000 apply -f testdata/invalidsvc.yaml
E1001 12:05:47.978247    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-755000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-755000: exit status 115 (132.19775ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31746 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-755000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-755000 delete -f testdata/invalidsvc.yaml: (1.202853416s)
--- PASS: TestFunctional/serial/InvalidService (4.43s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 config get cpus: exit status 14 (29.988625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 config get cpus: exit status 14 (33.666833ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (8.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-755000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-755000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2711: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.96s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-755000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (137.254417ms)

-- stdout --
	* [functional-755000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1001 12:06:36.059447    2682 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:06:36.059563    2682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:06:36.059566    2682 out.go:358] Setting ErrFile to fd 2...
	I1001 12:06:36.059569    2682 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:06:36.059685    2682 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:06:36.060697    2682 out.go:352] Setting JSON to false
	I1001 12:06:36.077754    2682 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2161,"bootTime":1727807435,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:06:36.077828    2682 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:06:36.082445    2682 out.go:177] * [functional-755000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 12:06:36.090489    2682 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:06:36.090533    2682 notify.go:220] Checking for updates...
	I1001 12:06:36.098469    2682 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:06:36.108437    2682 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:06:36.120436    2682 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:06:36.128460    2682 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:06:36.132492    2682 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:06:36.135829    2682 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:06:36.136099    2682 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:06:36.140425    2682 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 12:06:36.146453    2682 start.go:297] selected driver: qemu2
	I1001 12:06:36.146460    2682 start.go:901] validating driver "qemu2" against &{Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:06:36.146540    2682 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:06:36.152491    2682 out.go:201] 
	W1001 12:06:36.156474    2682 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 12:06:36.160449    2682 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
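
Note: the non-zero exit above is the expected outcome. Even with --dry-run, minikube validates the requested resources before touching the VM and rejects 250MB because it is below the usable floor of 1800MB, exiting with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A hedged Go sketch of that kind of pre-flight check follows; the floor and exit code are taken from the log, while the function itself is illustrative, not minikube's actual validation code.

// Illustrative pre-flight memory check modeled on the log above.
package main

import (
	"fmt"
	"os"
)

const minUsableMB = 1800 // usable floor reported in the log

// validateMemory rejects a request below the usable minimum, mirroring
// the RSRC_INSUFFICIENT_REQ_MEMORY failure the dry run exercises.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil { // --memory 250MB, as in the test
		fmt.Println("X Exiting due to", err)
		os.Exit(23) // matches the exit status the test asserts
	}
}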

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-755000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-755000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (123.305209ms)

-- stdout --
	* [functional-755000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1001 12:06:35.932429    2674 out.go:345] Setting OutFile to fd 1 ...
	I1001 12:06:35.932539    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:06:35.932542    2674 out.go:358] Setting ErrFile to fd 2...
	I1001 12:06:35.932545    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 12:06:35.932671    2674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
	I1001 12:06:35.934120    2674 out.go:352] Setting JSON to false
	I1001 12:06:35.952263    2674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2160,"bootTime":1727807435,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W1001 12:06:35.952368    2674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 12:06:35.957529    2674 out.go:177] * [functional-755000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I1001 12:06:35.964481    2674 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 12:06:35.964565    2674 notify.go:220] Checking for updates...
	I1001 12:06:35.971413    2674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	I1001 12:06:35.974517    2674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 12:06:35.977512    2674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 12:06:35.981459    2674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	I1001 12:06:35.985521    2674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 12:06:35.988798    2674 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 12:06:35.989087    2674 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 12:06:35.993505    2674 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1001 12:06:35.999412    2674 start.go:297] selected driver: qemu2
	I1001 12:06:35.999422    2674 start.go:901] validating driver "qemu2" against &{Name:functional-755000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-755000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 12:06:35.999478    2674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 12:06:36.006484    2674 out.go:201] 
	W1001 12:06:36.012500    2674 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 12:06:36.023532    2674 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fd8002ce-0707-45df-a916-46489d2b6404] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009087958s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-755000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-755000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-755000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-755000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ba458d51-73de-4643-9ca0-a2ca95d51e7c] Pending
helpers_test.go:344: "sp-pod" [ba458d51-73de-4643-9ca0-a2ca95d51e7c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ba458d51-73de-4643-9ca0-a2ca95d51e7c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010450292s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-755000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-755000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-755000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [67e51550-7ad8-4818-8df6-e329ad8208b1] Pending
helpers_test.go:344: "sp-pod" [67e51550-7ad8-4818-8df6-e329ad8208b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [67e51550-7ad8-4818-8df6-e329ad8208b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009610333s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-755000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.96s)
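
Note: this test checks persistence, not just provisioning: a file is written onto the PVC-backed mount, the pod is deleted and recreated, and the file must still be visible from the new pod. A minimal Go sketch of that flow using kubectl follows, assuming the functional-755000 context and the same testdata manifests; the readiness waits the real test performs between steps are omitted for brevity.

// Illustrative PVC persistence check, driven through kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubectl runs a command against the functional-755000 context and
// returns its combined output.
func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-755000"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Write a marker file onto the PVC-backed mount inside the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Delete and recreate the pod; the claim (and its data) must outlive it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The marker written by the first pod should still be present.
	if out := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); strings.Contains(out, "foo") {
		fmt.Println("PVC data survived pod recreation")
	}
}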

TestFunctional/parallel/SSHCmd (0.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh -n functional-755000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cp functional-755000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd4074812369/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh -n functional-755000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh -n functional-755000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/FileSync (0.11s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1595/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /etc/test/nested/copy/1595/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.11s)

TestFunctional/parallel/CertSync (0.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1595.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /etc/ssl/certs/1595.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1595.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /usr/share/ca-certificates/1595.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /etc/ssl/certs/15952.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /usr/share/ca-certificates/15952.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-755000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh "sudo systemctl is-active crio": exit status 1 (66.997667ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

TestFunctional/parallel/License (1.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.337927291s)
--- PASS: TestFunctional/parallel/License (1.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2540: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-755000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [940d6cc6-5316-4a8f-bf2c-6ed764a6c0ff] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1001 12:05:53.101759    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "nginx-svc" [940d6cc6-5316-4a8f-bf2c-6ed764a6c0ff] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003372833s
I1001 12:06:02.446037    1595 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-755000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.163.215 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
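
Note: taken together, the serial TunnelCmd steps above reduce to the following manual sequence. All commands are as recorded in this run; the tunnel is backgrounded here the way the harness daemonizes it, and the final curl is an illustrative addition (the test performs the HTTP GET itself) against the ingress IP reported above.

  out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr &
  kubectl --context functional-755000 apply -f testdata/testsvc.yaml
  kubectl --context functional-755000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.100.163.215/    # illustrative only; not part of the test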

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1001 12:06:02.506087    1595 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
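
Note: the two DNS checks above probe different paths. dig queries the cluster DNS service directly at 10.96.0.10, while dscacheutil resolves through the macOS system resolver, which the running tunnel is expected to have configured for the cluster domain. The equivalent manual commands, exactly as run:

  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.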

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1001 12:06:02.554385    1595 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-755000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-755000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-755000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-58hvf" [ed95bd6f-bf0b-4808-839a-f26beb65f6ce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-58hvf" [ed95bd6f-bf0b-4808-839a-f26beb65f6ce] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009365417s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service list -o json
functional_test.go:1494: Took "290.944875ms" to run "out/minikube-darwin-arm64 -p functional-755000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:31317
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:31317
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
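
Note: condensed, the ServiceCmd group above boils down to this sequence (commands as recorded in this run; the endpoints shown in the comments are the ones found above):

  kubectl --context functional-755000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-755000 expose deployment hello-node --type=NodePort --port=8080
  out/minikube-darwin-arm64 -p functional-755000 service list
  out/minikube-darwin-arm64 -p functional-755000 service --namespace=default --https --url hello-node   # https://192.168.105.4:31317
  out/minikube-darwin-arm64 -p functional-755000 service hello-node --url                               # http://192.168.105.4:31317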

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "101.773542ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "33.802209ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "96.529542ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "33.110708ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)
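
Note: the four profile listings compared above are repeated below; the roughly 3x gap in wall time (about 100ms versus about 33ms) suggests the -l/--light variants skip per-profile status probing, though that reading is an inference from the timings, not stated in the log.

  out/minikube-darwin-arm64 profile list
  out/minikube-darwin-arm64 profile list -l
  out/minikube-darwin-arm64 profile list -o json
  out/minikube-darwin-arm64 profile list -o json --light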

TestFunctional/parallel/MountCmd/any-port (10.39s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727809585578912000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727809585578912000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727809585578912000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001/test-1727809585578912000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 19:06 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 19:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 19:06 test-1727809585578912000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh cat /mount-9p/test-1727809585578912000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-755000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3539f911-8fc5-4047-b105-bdd9567e63d8] Pending
helpers_test.go:344: "busybox-mount" [3539f911-8fc5-4047-b105-bdd9567e63d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3539f911-8fc5-4047-b105-bdd9567e63d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3539f911-8fc5-4047-b105-bdd9567e63d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003070042s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-755000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port822019796/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.39s)
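
Note: stripped of the test scaffolding, the any-port mount check above is this sequence. <host-dir> is a placeholder for the temp directory shown in the log; in this variant minikube picks the 9p port itself.

  out/minikube-darwin-arm64 mount -p functional-755000 <host-dir>:/mount-9p --alsologtostderr -v=1 &
  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is live
  out/minikube-darwin-arm64 -p functional-755000 ssh -- ls -la /mount-9p
  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo umount -f /mount-9p"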

TestFunctional/parallel/MountCmd/specific-port (0.77s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1283025291/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (62.578292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 12:06:36.033687    1595 retry.go:31] will retry after 279.652182ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1283025291/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh "sudo umount -f /mount-9p": exit status 1 (63.717333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-755000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1283025291/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.77s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount1: exit status 1 (85.202875ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 12:06:36.834405    1595 retry.go:31] will retry after 711.18622ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-755000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-755000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3282122710/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)
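
Note: mount --kill=true is the cleanup path exercised here: it terminates every mount daemon belonging to the profile in one step, which is why each of the three per-mount stop attempts afterwards reports "unable to find parent, assuming dead". Reduced to commands (<host-dir> again a placeholder for the temp directory in the log):

  out/minikube-darwin-arm64 mount -p functional-755000 <host-dir>:/mount1 --alsologtostderr -v=1 &   # likewise /mount2 and /mount3
  out/minikube-darwin-arm64 -p functional-755000 ssh "findmnt -T" /mount1
  out/minikube-darwin-arm64 mount -p functional-755000 --kill=true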

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.15s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls --format short --alsologtostderr
2024/10/01 12:06:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-755000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-755000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-755000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-755000 image ls --format short --alsologtostderr:
I1001 12:06:45.021677    2864 out.go:345] Setting OutFile to fd 1 ...
I1001 12:06:45.024669    2864 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.024675    2864 out.go:358] Setting ErrFile to fd 2...
I1001 12:06:45.024678    2864 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.024891    2864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:06:45.025321    2864 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.025385    2864 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.026220    2864 ssh_runner.go:195] Run: systemctl --version
I1001 12:06:45.026229    2864 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
I1001 12:06:45.054365    2864 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)
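
Note: the four ImageList subtests in this group (short above, then table, json, and yaml below) exercise the same listing with different renderings; each run issues the identical docker images --no-trunc query over SSH (visible in the stderr traces) and differs only in output format:

  out/minikube-darwin-arm64 -p functional-755000 image ls --format short
  out/minikube-darwin-arm64 -p functional-755000 image ls --format table
  out/minikube-darwin-arm64 -p functional-755000 image ls --format json
  out/minikube-darwin-arm64 -p functional-755000 image ls --format yaml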

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-755000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-755000 | 2ce0277e632a5 | 30B    |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 6e8672ddd037e | 193MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kicbase/echo-server               | functional-755000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-755000 image ls --format table --alsologtostderr:
I1001 12:06:45.351210    2875 out.go:345] Setting OutFile to fd 1 ...
I1001 12:06:45.351326    2875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.351329    2875 out.go:358] Setting ErrFile to fd 2...
I1001 12:06:45.351331    2875 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.351459    2875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:06:45.351894    2875 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.351954    2875 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.352781    2875 ssh_runner.go:195] Run: systemctl --version
I1001 12:06:45.352790    2875 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
I1001 12:06:45.384585    2875 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-755000 image ls --format json --alsologtostderr:
[{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-755000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d
99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":[],"repoTags":["docker.io/library/ngi
nx:latest"],"size":"193000000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"2ce0277e632a5d630aed744f0f01879b26da4316b854372a397df803f2a719c1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-755000
"],"size":"30"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-755000 image ls --format json --alsologtostderr:
I1001 12:06:45.269296    2873 out.go:345] Setting OutFile to fd 1 ...
I1001 12:06:45.269497    2873 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.269501    2873 out.go:358] Setting ErrFile to fd 2...
I1001 12:06:45.269503    2873 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.269652    2873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:06:45.270131    2873 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.270193    2873 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.271078    2873 ssh_runner.go:195] Run: systemctl --version
I1001 12:06:45.271089    2873 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
I1001 12:06:45.301914    2873 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-755000 image ls --format yaml --alsologtostderr:
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-755000
size: "4780000"
- id: 2ce0277e632a5d630aed744f0f01879b26da4316b854372a397df803f2a719c1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-755000
size: "30"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-755000 image ls --format yaml --alsologtostderr:
I1001 12:06:45.097564    2867 out.go:345] Setting OutFile to fd 1 ...
I1001 12:06:45.097726    2867 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.097730    2867 out.go:358] Setting ErrFile to fd 2...
I1001 12:06:45.097732    2867 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.097848    2867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:06:45.098278    2867 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.098341    2867 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.099166    2867 ssh_runner.go:195] Run: systemctl --version
I1001 12:06:45.099174    2867 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
I1001 12:06:45.126941    2867 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-755000 ssh pgrep buildkitd: exit status 1 (64.866917ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image build -t localhost/my-image:functional-755000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-755000 image build -t localhost/my-image:functional-755000 testdata/build --alsologtostderr: (4.584195625s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-755000 image build -t localhost/my-image:functional-755000 testdata/build --alsologtostderr:
I1001 12:06:45.235670    2871 out.go:345] Setting OutFile to fd 1 ...
I1001 12:06:45.235877    2871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.235880    2871 out.go:358] Setting ErrFile to fd 2...
I1001 12:06:45.235883    2871 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 12:06:45.236006    2871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19736-1073/.minikube/bin
I1001 12:06:45.236436    2871 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.237275    2871 config.go:182] Loaded profile config "functional-755000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 12:06:45.238153    2871 ssh_runner.go:195] Run: systemctl --version
I1001 12:06:45.238166    2871 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19736-1073/.minikube/machines/functional-755000/id_rsa Username:docker}
I1001 12:06:45.266884    2871 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2028011292.tar
I1001 12:06:45.266939    2871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 12:06:45.270737    2871 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2028011292.tar
I1001 12:06:45.272628    2871 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2028011292.tar: stat -c "%s %y" /var/lib/minikube/build/build.2028011292.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2028011292.tar': No such file or directory
I1001 12:06:45.272646    2871 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2028011292.tar --> /var/lib/minikube/build/build.2028011292.tar (3072 bytes)
I1001 12:06:45.282591    2871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2028011292
I1001 12:06:45.285976    2871 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2028011292 -xf /var/lib/minikube/build/build.2028011292.tar
I1001 12:06:45.289460    2871 docker.go:360] Building image: /var/lib/minikube/build/build.2028011292
I1001 12:06:45.289520    2871 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-755000 /var/lib/minikube/build/build.2028011292
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 1.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 1.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:769e2f0a2ebaffb74bcaeaa44264d219c61e46d4bad1d497c08a0365d457d872 done
#8 naming to localhost/my-image:functional-755000 done
#8 DONE 0.0s
I1001 12:06:49.750221    2871 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-755000 /var/lib/minikube/build/build.2028011292: (4.460714583s)
I1001 12:06:49.750285    2871 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2028011292
I1001 12:06:49.753783    2871 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2028011292.tar
I1001 12:06:49.757730    2871 build_images.go:217] Built localhost/my-image:functional-755000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2028011292.tar
I1001 12:06:49.757746    2871 build_images.go:133] succeeded building to: functional-755000
I1001 12:06:49.757748    2871 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.72s)
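
Note: the initial pgrep buildkitd failure is the expected branch with the docker runtime: no standalone buildkitd is running, so image build ships the build context to the node as a tar and runs docker build there, as the trace above shows. Condensed:

  out/minikube-darwin-arm64 -p functional-755000 ssh pgrep buildkitd    # exit status 1: no buildkitd under the docker runtime
  out/minikube-darwin-arm64 -p functional-755000 image build -t localhost/my-image:functional-755000 testdata/build
  out/minikube-darwin-arm64 -p functional-755000 image ls               # localhost/my-image:functional-755000 should now appear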

TestFunctional/parallel/ImageCommands/Setup (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.702541709s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-755000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image load --daemon kicbase/echo-server:functional-755000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.71s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image load --daemon kicbase/echo-server:functional-755000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-755000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image load --daemon kicbase/echo-server:functional-755000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image save kicbase/echo-server:functional-755000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image rm kicbase/echo-server:functional-755000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-755000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 image save --daemon kicbase/echo-server:functional-755000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-755000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.27s)
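
Note: the preceding four subtests form a save/remove/load round trip through a tarball and back into the host docker daemon, all with commands recorded above:

  out/minikube-darwin-arm64 -p functional-755000 image save kicbase/echo-server:functional-755000 /Users/jenkins/workspace/echo-server-save.tar
  out/minikube-darwin-arm64 -p functional-755000 image rm kicbase/echo-server:functional-755000
  out/minikube-darwin-arm64 -p functional-755000 image load /Users/jenkins/workspace/echo-server-save.tar
  docker rmi kicbase/echo-server:functional-755000
  out/minikube-darwin-arm64 -p functional-755000 image save --daemon kicbase/echo-server:functional-755000
  docker image inspect kicbase/echo-server:functional-755000             # verifies the image is back in the host daemon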

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-755000 docker-env) && out/minikube-darwin-arm64 status -p functional-755000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-755000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)
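
Note: docker-env prints shell exports (DOCKER_HOST and related TLS variables) pointing at the docker daemon inside the functional-755000 node; evaluating them makes the host docker client operate against the cluster's daemon for the rest of the shell:

  eval $(out/minikube-darwin-arm64 -p functional-755000 docker-env)
  docker images    # now lists the images inside the node, not the host's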

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-755000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
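
Note: update-context rewrites the profile's kubeconfig entry to the cluster's current address, which is useful after an IP or port change; the three subtests run the identical command and appear to differ only in the kubeconfig state they start from (unchanged, missing cluster, no clusters):

  out/minikube-darwin-arm64 -p functional-755000 update-context --alsologtostderr -v=2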

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-755000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-755000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-755000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (233.35s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-268000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1001 12:07:04.792189    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:08:26.715176    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:42.826940    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-268000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m53.174385583s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (233.35s)
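
Note: the --ha flag in the start command above is what provisions the multi-control-plane topology exercised by the rest of this group; the follow-up status call confirms all nodes came up. The flow, condensed from the log:

	# Start an HA (multi-control-plane) cluster, then verify node state.
	$ out/minikube-darwin-arm64 start -p ha-268000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
	$ out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
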
TestMultiControlPlane/serial/DeployApp (9.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-268000 -- rollout status deployment/busybox: (7.35980425s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-sg6rx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-xdmlc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-sg6rx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-xdmlc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- nslookup kubernetes.default.svc.cluster.local
E1001 12:10:52.178547    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:52.186396    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:52.197987    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-sg6rx -- nslookup kubernetes.default.svc.cluster.local
E1001 12:10:52.220446    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:52.262267    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-xdmlc -- nslookup kubernetes.default.svc.cluster.local
E1001 12:10:52.345847    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DeployApp (9.01s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1001 12:10:52.507793    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-sg6rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-sg6rx -- sh -c "ping -c 1 192.168.105.1"
E1001 12:10:52.829812    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-xdmlc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-xdmlc -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
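
Note: the shell pipeline above depends on BusyBox nslookup's output layout: awk 'NR==5' keeps the fifth line (where the resolved address appears) and cut -d' ' -f3 takes its third space-separated field, yielding the host gateway IP that is then pinged. One round trip, sketched with a pod name from this run:

	# Resolve host.minikube.internal inside a pod, then ping the extracted IP.
	$ out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ out/minikube-darwin-arm64 kubectl -p ha-268000 -- exec busybox-7dff88458-pvjcl -- \
	    sh -c "ping -c 1 192.168.105.1"
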
TestMultiControlPlane/serial/AddWorkerNode (69.03s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-268000 -v=7 --alsologtostderr
E1001 12:10:53.472727    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:54.755430    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:10:57.267846    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:11:02.391167    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:11:10.506435    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/addons-075000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:11:12.634374    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
E1001 12:11:33.115743    1595 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19736-1073/.minikube/profiles/functional-755000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-268000 -v=7 --alsologtostderr: (1m8.824045084s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (69.03s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-268000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.49s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.490650542s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.49s)

TestMultiControlPlane/serial/CopyFile (4.08s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp testdata/cp-test.txt ha-268000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3047172335/001/cp-test_ha-268000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000:/home/docker/cp-test.txt ha-268000-m02:/home/docker/cp-test_ha-268000_ha-268000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test_ha-268000_ha-268000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000:/home/docker/cp-test.txt ha-268000-m03:/home/docker/cp-test_ha-268000_ha-268000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test_ha-268000_ha-268000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000:/home/docker/cp-test.txt ha-268000-m04:/home/docker/cp-test_ha-268000_ha-268000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test_ha-268000_ha-268000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp testdata/cp-test.txt ha-268000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3047172335/001/cp-test_ha-268000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m02:/home/docker/cp-test.txt ha-268000:/home/docker/cp-test_ha-268000-m02_ha-268000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test_ha-268000-m02_ha-268000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m02:/home/docker/cp-test.txt ha-268000-m03:/home/docker/cp-test_ha-268000-m02_ha-268000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test_ha-268000-m02_ha-268000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m02:/home/docker/cp-test.txt ha-268000-m04:/home/docker/cp-test_ha-268000-m02_ha-268000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test_ha-268000-m02_ha-268000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp testdata/cp-test.txt ha-268000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3047172335/001/cp-test_ha-268000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m03:/home/docker/cp-test.txt ha-268000:/home/docker/cp-test_ha-268000-m03_ha-268000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test_ha-268000-m03_ha-268000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m03:/home/docker/cp-test.txt ha-268000-m02:/home/docker/cp-test_ha-268000-m03_ha-268000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test_ha-268000-m03_ha-268000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m03:/home/docker/cp-test.txt ha-268000-m04:/home/docker/cp-test_ha-268000-m03_ha-268000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test_ha-268000-m03_ha-268000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp testdata/cp-test.txt ha-268000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile3047172335/001/cp-test_ha-268000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m04:/home/docker/cp-test.txt ha-268000:/home/docker/cp-test_ha-268000-m04_ha-268000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000 "sudo cat /home/docker/cp-test_ha-268000-m04_ha-268000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m04:/home/docker/cp-test.txt ha-268000-m02:/home/docker/cp-test_ha-268000-m04_ha-268000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test_ha-268000-m04_ha-268000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 cp ha-268000-m04:/home/docker/cp-test.txt ha-268000-m03:/home/docker/cp-test_ha-268000-m04_ha-268000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m03 "sudo cat /home/docker/cp-test_ha-268000-m04_ha-268000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.08s)
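
Note: the copy matrix above repeats one pattern for every (source, destination) node pair: minikube cp to place the file, then minikube ssh with sudo cat to verify its contents. One round trip from this run:

	# Copy a file onto a secondary node, then read it back to verify.
	$ out/minikube-darwin-arm64 -p ha-268000 cp testdata/cp-test.txt ha-268000-m02:/home/docker/cp-test.txt
	$ out/minikube-darwin-arm64 -p ha-268000 ssh -n ha-268000-m02 "sudo cat /home/docker/cp-test.txt"
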
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-756000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-756000 --output=json --user=testUser: (1.858322916s)
--- PASS: TestJSONOutput/stop/Command (1.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-881000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-881000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (103.888208ms)

-- stdout --
	{"specversion":"1.0","id":"026de504-d518-42dd-8999-16204be339de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-881000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4834e6f8-e369-4e5d-80c5-517b2b02fb63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"939ebcea-3dcc-43dd-aafc-7482c950717a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig"}}
	{"specversion":"1.0","id":"d9f57aae-19cb-43a0-8cc0-72d5fe8ef1ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"54b86190-f90e-41c3-86b7-5796c957977d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f0ecf69-4ea3-4a83-bc75-38f6b859a4a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube"}}
	{"specversion":"1.0","id":"23735db6-578d-4178-9f7f-59dfa67f47d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b5c6fe77-f39e-4274-8979-ed2fb8ce7757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-881000
--- PASS: TestErrorJSONOutput (0.21s)
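
Note: each stdout line above is a CloudEvents-style JSON object, so the stream is straightforward to post-process. A hedged sketch (the jq filter is illustrative, not part of the test):

	# Pull only the error event's message out of the JSON event stream.
	$ out/minikube-darwin-arm64 start -p json-output-error-881000 --memory=2200 \
	    --output=json --wait=true --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
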
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (4.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.694958ms)

-- stdout --
	* [NoKubernetes-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19736-1073/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19736-1073/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
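
Note: the exit status 14 here is the expected MK_USAGE failure: --no-kubernetes and --kubernetes-version are mutually exclusive. A hedged sketch of the two valid ways out, following the hint in stderr:

	# Either start without a version flag...
	$ out/minikube-darwin-arm64 start -p NoKubernetes-870000 --no-kubernetes --driver=qemu2
	# ...or clear a globally configured version first.
	$ out/minikube-darwin-arm64 config unset kubernetes-version
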
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-870000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-870000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.958333ms)

-- stdout --
	* The control-plane node NoKubernetes-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-870000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
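
Note: this check is driven purely by exit codes: systemctl is-active --quiet prints nothing and exits non-zero when kubelet is inactive, and in this run the ssh wrapper itself exits 83 because the host VM is stopped. A hedged sketch:

	# A non-zero exit from the probe means kubelet is not running.
	$ out/minikube-darwin-arm64 ssh -p NoKubernetes-870000 \
	    "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"
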
TestNoKubernetes/serial/ProfileList (31.3s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.631176708s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.668274459s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.30s)

TestNoKubernetes/serial/Stop (3.76s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-870000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-870000: (3.755003792s)
--- PASS: TestNoKubernetes/serial/Stop (3.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-870000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-870000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (37.0405ms)

-- stdout --
	* The control-plane node NoKubernetes-870000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-870000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.6s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-340000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.60s)

TestStartStop/group/old-k8s-version/serial/Stop (3.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-166000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-166000 --alsologtostderr -v=3: (3.468135333s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000 -n old-k8s-version-166000: exit status 7 (44.815292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-166000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
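
Note: status --format={{.Host}} exits with status 7 when the host is stopped, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon on the stopped profile. A hedged sketch (the --images override from the test is omitted for brevity):

	# Exit status 7 with "Stopped" is acceptable here; the addon can still be enabled.
	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-166000
	$ out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-166000
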
TestStartStop/group/no-preload/serial/Stop (3.39s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-877000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-877000 --alsologtostderr -v=3: (3.391182041s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.39s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-877000 -n no-preload-877000: exit status 7 (59.988833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-877000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-044000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-044000 --alsologtostderr -v=3: (2.996240416s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-044000 -n embed-certs-044000: exit status 7 (55.845584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-044000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-402000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-402000 --alsologtostderr -v=3: (1.924693s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-402000 -n default-k8s-diff-port-402000: exit status 7 (57.183625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-402000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-200000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-200000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-200000 --alsologtostderr -v=3: (3.569767708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.57s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-200000 -n newest-cni-200000: exit status 7 (59.339375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-200000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-298000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-298000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-298000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/hosts:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/resolv.conf:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-298000

>>> host: crictl pods:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: crictl containers:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> k8s: describe netcat deployment:
error: context "cilium-298000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-298000" does not exist

>>> k8s: netcat logs:
error: context "cilium-298000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-298000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-298000" does not exist

>>> k8s: coredns logs:
error: context "cilium-298000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-298000" does not exist

>>> k8s: api server logs:
error: context "cilium-298000" does not exist

>>> host: /etc/cni:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: ip a s:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: ip r s:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: iptables-save:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: iptables table nat:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-298000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-298000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-298000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-298000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-298000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-298000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-298000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-298000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-298000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-298000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-298000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: kubelet daemon config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> k8s: kubelet logs:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
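
The empty kubeconfig dumped above (clusters, contexts, and users all null) is the root cause of every failure in this block: there is no cilium-298000 context for kubectl to select, and no profile for the host-side minikube commands to find, because the skip fired before the cluster was ever started.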

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-298000

>>> host: docker daemon status:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: docker daemon config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: docker system info:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: cri-docker daemon status:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: cri-docker daemon config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: cri-dockerd version:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: containerd daemon status:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: containerd daemon config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: containerd config dump:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: crio daemon status:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: crio daemon config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: /etc/crio:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

>>> host: crio config:
* Profile "cilium-298000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-298000"

----------------------- debugLogs end: cilium-298000 [took: 2.172223834s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-298000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-298000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)

TestStartStop/group/disable-driver-mounts (0.1s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-171000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.10s)
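
The PAUSE/CONT pair in this entry is standard "go test -v" tracing for a parallel subtest: the subtest registers itself with t.Parallel(), is suspended, and is resumed once the serial portion finishes, at which point the virtualbox-only guard fires. A minimal sketch, with the guard reduced to the same hypothetical driverName() lookup as above:

    package startstop_test

    import "testing"

    // driverName is a hypothetical stand-in for the harness's driver lookup.
    func driverName() string { return "qemu2" }

    func TestStartStop(t *testing.T) {
        t.Run("group/disable-driver-mounts", func(t *testing.T) {
            t.Parallel() // emits "=== PAUSE", then "=== CONT" once resumed
            if driverName() != "virtualbox" {
                t.Skip("skipping - only runs on virtualbox")
            }
        })
    }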
